Message-ID: <55315fc9-4439-43b0-a4d2-89ab4ea598f0@suse.de>
Date: Tue, 25 Jun 2024 08:37:34 +0200
From: Hannes Reinecke <hare@...e.de>
To: Daniel Wagner <dwagner@...e.de>, Christoph Hellwig <hch@....de>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>, Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbecker@...e.com>, Mel Gorman <mgorman@...e.de>,
Sridhar Balaraman <sbalaraman@...allelwireless.com>,
"brookxu.cn" <brookxu.cn@...il.com>, Ming Lei <ming.lei@...hat.com>,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 1/3] sched/isolation: Add io_queue housekeeping option
On 6/24/24 11:00, Daniel Wagner wrote:
> On Mon, Jun 24, 2024 at 10:47:05AM GMT, Christoph Hellwig wrote:
>>> Do you think we should introduce a new type or just use the existing
>>> managed_irq for this?
>>
>> No idea really. What was the reason for adding a new one?
>
> I've added the new type so that the current behavior of spreading the
> queues over to the isolated CPUs is still possible. I don't know if this
> is a valid use case or not. I just didn't want to kill this feature
> without having discussed it first.
>
> But if we agree this doesn't really make sense with isolcpus, then I
> think we should use the managed_irq one, as nvme-pci is using the
> managed IRQ API.
>
I'm in favour of expanding/modifying the managed irq case.
For managed irqs the driver will be running on the housekeeping CPUs
only, and has no way of even installing irq handlers for the isolcpus.
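For context, the existing managed_irq isolation behaviour is selected on the
kernel command line; a sketch of what "expanding the managed irq case" would
mean in practice (the io_queue spelling below is the flag proposed by this
patch series, not an existing upstream parameter):

```
# Existing usage: isolate CPUs 2-7 and confine the affinity of managed
# device interrupts to the housekeeping CPUs (here 0-1), so no I/O queue
# interrupts land on the isolated set.
isolcpus=managed_irq,2-7

# Proposed alternative from this series (hypothetical spelling): a separate
# io_queue flag that also keeps blk-mq queue mappings off the isolated CPUs.
isolcpus=io_queue,2-7
```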
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@...e.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich