Message-ID: <20180308074220.GC15748@lst.de>
Date: Thu, 8 Mar 2018 08:42:20 +0100
From: Christoph Hellwig <hch@....de>
To: Keith Busch <keith.busch@...el.com>
Cc: Ming Lei <ming.lei@...hat.com>, Christoph Hellwig <hch@....de>,
Jianchao Wang <jianchao.w.wang@...cle.com>, axboe@...com,
sagi@...mberg.me, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0
On Thu, Mar 01, 2018 at 09:10:42AM -0700, Keith Busch wrote:
> On Thu, Mar 01, 2018 at 11:03:30PM +0800, Ming Lei wrote:
> > If all CPUs for the 1st IRQ vector of admin queue are offline, then I
> > guess NVMe can't work any more.
>
> Yikes, with respect to admin commands, it appears you're right if your
> system allows offlining CPU0.
>
> > So looks it is a good idea to make admin queue's IRQ vector assigned as
> > non-managed IRQs.
>
> It'd still be considered managed even if it's a 'pre_vector', though
> it would get the default mask with all possible CPUs.
Which basically does the right thing. So I suspect we'll need to
go with a patch like this, just with a way better changelog.
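
[The "pre_vector" mechanism discussed above corresponds to the struct irq_affinity argument of pci_alloc_irq_vectors_affinity(). A minimal sketch of how a driver reserves the first vector so the affinity-spreading code leaves it with the default all-possible-CPUs mask; the function name and queue arithmetic here are illustrative, not taken from the actual patch:

```c
/*
 * Illustrative sketch only, not the patch under review: reserve
 * vector 0 as a "pre" vector for the admin queue.  Pre-vectors are
 * still managed, but they are excluded from the per-CPU spreading
 * and receive the default affinity mask covering all possible CPUs,
 * so the admin queue keeps working even if some CPUs go offline.
 */
static const struct irq_affinity affd = {
	.pre_vectors = 1,	/* vector 0: admin queue, not spread */
};

static int nvme_setup_irqs(struct pci_dev *pdev, unsigned int nr_io_queues)
{
	/* one extra vector for the admin queue on top of the I/O queues */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}
```

The I/O queue vectors (everything after the pre-vector) are then spread across CPUs as usual by the managed-IRQ core.]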