Message-ID: <sjna556zvxyyj6as5pk6bbgmdwoiybl4gubqnlrxyfdvhbnlma@ewka5cxxowr7>
Date: Mon, 24 Jun 2024 11:00:39 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Christoph Hellwig <hch@....de>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>, Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbecker@...e.com>, Mel Gorman <mgorman@...e.de>, Hannes Reinecke <hare@...e.de>,
Sridhar Balaraman <sbalaraman@...allelwireless.com>, "brookxu.cn" <brookxu.cn@...il.com>,
Ming Lei <ming.lei@...hat.com>, linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 1/3] sched/isolation: Add io_queue housekeeping option
On Mon, Jun 24, 2024 at 10:47:05AM GMT, Christoph Hellwig wrote:
> > Do you think we should introduce a new type or just use the existing
> > managed_irq for this?
>
> No idea really. What was the reason for adding a new one?
I've added the new type so that the current behavior of spreading the
queues over the isolated CPUs is still possible. I don't know whether
this is a valid use case or not; I just didn't want to kill this feature
without having discussed it first.
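
To make this concrete: the new option is just one more token in the
isolcpus= parser, next to managed_irq. A rough sketch from memory (not
the actual patch hunk; HK_FLAG_IO_QUEUE is the new bit, everything else
mirrors the existing kernel/sched/isolation.c code):

    static int __init housekeeping_isolcpus_setup(char *str)
    {
            unsigned long flags = 0;

            while (isalpha(*str)) {
                    if (!strncmp(str, "managed_irq,", 12)) {
                            str += 12;
                            flags |= HK_FLAG_MANAGED_IRQ;
                            continue;
                    }

                    /* new token: keep IO queues off the isolated CPUs */
                    if (!strncmp(str, "io_queue,", 9)) {
                            str += 9;
                            flags |= HK_FLAG_IO_QUEUE;
                            continue;
                    }

                    pr_warn("isolcpus: Invalid flag %s\n", str);
                    return 0;
            }

            /* default behaviour for isolcpus without flags */
            if (!flags)
                    flags |= HK_FLAG_DOMAIN;

            /* the rest of the string is the cpulist itself */
            return housekeeping_setup(str, flags);
    }

With a separate type, "isolcpus=managed_irq,..." keeps today's spreading
behavior while "isolcpus=io_queue,..." opts into the new one.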
But if we agree that this doesn't really make sense with isolcpus, then
I think we should use the managed_irq one, since nvme-pci is already
using the managed IRQ API.
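
For illustration, reusing managed_irq on the driver side could be as
simple as clamping the queue count to the housekeeping mask. Untested
sketch, nvme_nr_io_queues() is a made-up helper name:

    #include <linux/cpumask.h>
    #include <linux/sched/isolation.h>

    static unsigned int nvme_nr_io_queues(unsigned int nr_requested)
    {
            const struct cpumask *hk_mask;

            /*
             * housekeeping_cpumask() falls back to cpu_possible_mask
             * when no isolation is configured, so this is a no-op on
             * a normal system.
             */
            hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

            /* never allocate more queues than housekeeping CPUs */
            return min(nr_requested, cpumask_weight(hk_mask));
    }

The actual spreading code would still have to avoid placing queues on
the isolated CPUs, but that logic could then key off the same
HK_TYPE_MANAGED_IRQ mask instead of a new type.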