Message-ID: <20180312090827.GB23903@ming.t460p>
Date: Mon, 12 Mar 2018 17:09:13 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Keith Busch <keith.busch@...el.com>
Cc: Christoph Hellwig <hch@....de>, sagi@...mberg.me,
linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
axboe@...com, Jianchao Wang <jianchao.w.wang@...cle.com>,
linux-block@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and
ioq0
On Fri, Mar 09, 2018 at 10:24:45AM -0700, Keith Busch wrote:
> On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
> >
> > So I suspect we'll need to go with a patch like this, just with a way
> > better changelog.
>
> I have to agree this is required for that use case. I'll run some
> quick tests and propose an alternate changelog.
>
> Longer term, the current way we're including offline present cpus either
> (a) has the driver allocate resources it can't use or (b) spreads the
> ones it can use thinner than they need to be. Why don't we rerun the
> irq spread under a hot cpu notifier for only online CPUs?
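For concreteness, such a hook might look roughly like the below (untested
sketch: cpuhp_setup_state() is the real registration API, but the respread
helper is only a hypothetical placeholder for recomputing the masks):

#include <linux/cpuhotplug.h>
#include <linux/init.h>

/* hypothetical: redo the vector spread over the currently online CPUs */
static int nvme_respread_irq_vectors(unsigned int cpu)
{
	/* would rebuild the affinity masks and reapply them per vector */
	return 0;
}

static int nvme_cpu_online(unsigned int cpu)
{
	return nvme_respread_irq_vectors(cpu);
}

static int __init nvme_cpuhp_init(void)
{
	/* dynamic AP state: callback runs whenever a CPU comes online */
	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "nvme/irq:online",
				 nvme_cpu_online, NULL);
}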
However, 4b855ad371 ("blk-mq: Create hctx for each present CPU") removed
the handling of mapping changes via the hot cpu notifier. That not only
cleaned up the code, but also fixed a very complicated queue dependency
issue:
- loop/dm-rq queue depends on underlying queue
- for NVMe, IO queue depends on admin queue
If freezing the queue can be avoided in the CPU notifier, it should be
fine to rerun the spread there; otherwise the notifier approach needs to
be avoided.
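For reference, the separation in $SUBJECT basically comes down to
reserving one pre-vector for the admin queue so it no longer shares
vector 0 with ioq0, roughly (untested sketch:
pci_alloc_irq_vectors_affinity() and struct irq_affinity are the real
APIs, the function and names around them are made up):

#include <linux/pci.h>
#include <linux/interrupt.h>

static int nvme_setup_irqs(struct pci_dev *pdev, unsigned int nr_io_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* vector 0: admin queue, kept out of the spread */
	};

	/* one vector per IO queue plus the reserved admin vector */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
					PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY,
					&affd);
}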
Thanks,
Ming