Message-ID: <20180313174052.GJ18494@localhost.localdomain>
Date: Tue, 13 Mar 2018 11:40:53 -0600
From: Keith Busch <keith.busch@...el.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: Jianchao Wang <jianchao.w.wang@...cle.com>, axboe@...com,
hch@....de, sagi@...mberg.me, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH V3] nvme-pci: assign separate irq vectors for adminq and
ioq1
On Tue, Mar 13, 2018 at 06:45:00PM +0800, Ming Lei wrote:
> On Tue, Mar 13, 2018 at 05:58:08PM +0800, Jianchao Wang wrote:
> > Currently, adminq and ioq1 share the same irq vector, whose affinity
> > is set to cpu0. If a system allows cpu0 to be offlined, the adminq
> > will not be able to work any more.
> >
> > To fix this, assign separate irq vectors for adminq and ioq1. Set
> > .pre_vectors == 1 when allocating irq vectors, then assign the first
> > one to adminq, which will have an affinity cpumask covering all
> > possible cpus. On the other hand, if the controller has only legacy
> > or single-message MSI, set up adminq and one ioq and let them share
> > the only irq vector.
> >
> > Signed-off-by: Jianchao Wang <jianchao.w.wang@...cle.com>
>
> Reviewed-by: Ming Lei <ming.lei@...hat.com>
Thanks, applied with an updated changelog.
Not being able to use the admin queue is a pretty big deal, so it's pushed
to the next nvme 4.16-rc branch. This may even be good stable material.
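
For anyone following along, the allocation the changelog describes looks
roughly like the sketch below. This is illustrative, not the exact hunk;
"pdev" and "nr_io_queues" stand in for the driver's actual state:

	/*
	 * Reserve the first vector for the admin queue via .pre_vectors,
	 * so the affinity spreading applies only to the IO queue vectors.
	 */
	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* vector 0: adminq, not affinity-managed */
	};
	int nr_vecs;

	/* One vector per IO queue, plus the dedicated admin vector. */
	nr_vecs = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (nr_vecs == 1) {
		/*
		 * Legacy INTx or single-message MSI: adminq and ioq1
		 * share the only vector, as the changelog notes.
		 */
	}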