Message-ID: <878qyt7b65.ffs@tglx>
Date: Tue, 25 Jun 2024 09:07:30 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Hannes Reinecke <hare@...e.de>, Daniel Wagner <dwagner@...e.de>,
Christoph Hellwig <hch@....de>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>, Sagi
Grimberg <sagi@...mberg.me>, Frederic Weisbecker <fweisbecker@...e.com>,
Mel Gorman <mgorman@...e.de>, Sridhar Balaraman
<sbalaraman@...allelwireless.com>, "brookxu.cn" <brookxu.cn@...il.com>,
Ming Lei <ming.lei@...hat.com>, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 1/3] sched/isolation: Add io_queue housekeeping option
On Tue, Jun 25 2024 at 08:37, Hannes Reinecke wrote:
> On 6/24/24 11:00, Daniel Wagner wrote:
>> On Mon, Jun 24, 2024 at 10:47:05AM GMT, Christoph Hellwig wrote:
>>>> Do you think we should introduce a new type or just use the existing
>>>> managed_irq for this?
>>>
>>> No idea really. What was the reason for adding a new one?
>>
>> I've added the new type so that the current behavior of spreading the
>> queues over to the isolated CPUs is still possible. I don't know if
>> this is a valid use case or not, but I didn't want to kill the feature
>> without discussing it first.
>>
>> But if we agree this doesn't really make sense with isolcpus, then I
>> think we should use the managed_irq one, as nvme-pci uses the managed
>> IRQ API.
>>
> I'm in favour of expanding/modifying the managed IRQ case.
> For managed IRQs the driver will run on the housekeeping CPUs only,
> and has no way of even installing IRQ handlers for the isolated CPUs.
Yes, that's preferred, but please double-check with the people who
introduced it in the first place.
Thanks,
tglx
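
(Editorial context: the managed_irq variant discussed above is the
existing isolcpus=managed_irq boot flag, exposed in the kernel via
housekeeping_cpumask(HK_TYPE_MANAGED_IRQ). Below is a minimal sketch of
how a driver could cap its I/O queue count against that mask --
housekeeping_cpumask() and HK_TYPE_MANAGED_IRQ are real kernel APIs, but
the helper name and the capping policy are illustrative, not taken from
the patch series.)

#include <linux/cpumask.h>
#include <linux/minmax.h>
#include <linux/sched/isolation.h>

/*
 * Illustrative sketch: limit a driver's I/O queue count so that no
 * queue ends up mapped exclusively to an isolated CPU. Not part of
 * the patch series.
 */
static unsigned int sketch_max_io_queues(unsigned int requested)
{
	const struct cpumask *hk_mask;

	/* CPUs left for housekeeping by isolcpus=managed_irq,<cpus> */
	hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

	return min(requested, cpumask_weight(hk_mask));
}

With, say, isolcpus=managed_irq,2-7 on the command line, the mask
excludes CPUs 2-7, so a driver using this policy would only size its
queue set for the remaining housekeeping CPUs.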