Message-ID: <66e4ad3e-4019-13ec-94c0-e168cc1d95b4@oracle.com>
Date: Thu, 1 Mar 2018 18:05:53 +0800
From: "jianchao.wang" <jianchao.w.wang@...cle.com>
To: Sagi Grimberg <sagi@...mberg.me>, Christoph Hellwig <hch@....de>
Cc: keith.busch@...el.com, axboe@...com, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and
ioq0
Hi Sagi

Thanks for your kind response.
On 03/01/2018 05:28 PM, Sagi Grimberg wrote:
>
>> Note that we originally allocated irqs this way, and Keith changed
>> it a while ago for good reasons. So I'd really like to see good
>> reasons for moving away from this, and some heuristics to figure
>> out which way to use. E.g. if the device supports more irqs than
>> I/O queues your scheme might always be fine.
>
> I still don't understand what this buys us in practice. Seems redundant
> to allocate another vector without any (even marginal) difference.
>
Even when the adminq is idle, the ioq0 irq completion path has to invoke nvme_irq twice:
once for ioq0 itself, and once more for the adminq's completion irq action registered on the same shared vector.
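
To make that concrete, here is a minimal sketch of the shared-vector setup. The demo_* names
are hypothetical, not the actual nvme-pci code: both queues register the same vector with
IRQF_SHARED, so the genirq core walks the irqaction list and calls the handler once per
registered action for every interrupt on that line.

/*
 * Minimal sketch, hypothetical demo_* names (not the real nvme-pci code).
 * Two queues share vector 0 via IRQF_SHARED, so one interrupt results in
 * two calls into demo_irq -- even when only one queue has completions.
 */
#include <linux/interrupt.h>
#include <linux/pci.h>

struct demo_queue {
	int qid;		/* hypothetical per-queue state */
};

static irqreturn_t demo_irq(int irq, void *data)
{
	struct demo_queue *q = data;

	/*
	 * A real handler would scan q's completion queue here and
	 * return IRQ_NONE when no entry belonged to this queue.
	 */
	pr_debug("demo: irq %d fired for queue %d\n", irq, q->qid);
	return IRQ_HANDLED;
}

static int demo_setup_shared_irqs(struct pci_dev *pdev,
				  struct demo_queue *adminq,
				  struct demo_queue *ioq0)
{
	int vec = pci_irq_vector(pdev, 0);
	int ret;

	/* Both actions hang off vector 0: one interrupt, two handler calls. */
	ret = request_irq(vec, demo_irq, IRQF_SHARED, "demo-adminq", adminq);
	if (ret)
		return ret;
	return request_irq(vec, demo_irq, IRQF_SHARED, "demo-ioq0", ioq0);
}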
We are trying to save every CPU cycle across the nvme host path, so why waste nvme_irq cycles here?
If we have enough vectors, we could allocate a separate irq vector for the adminq to avoid this.
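
Along the lines of Christoph's point that the scheme might always be fine when the device
supports more irqs than I/O queues, here is a hedged sketch of that alternative (same
hypothetical demo_* names, reusing demo_irq and struct demo_queue from the sketch above):
request one vector beyond the I/O queues so the adminq no longer shares with ioq0.

/*
 * Sketch of the separate-vector alternative, hypothetical demo_* names.
 * Assuming the device grants at least two vectors, the adminq keeps
 * vector 0 to itself and ioq0 moves to vector 1, so an I/O completion
 * never enters the admin queue's handler.
 */
static int demo_setup_separate_irqs(struct pci_dev *pdev,
				    struct demo_queue *adminq,
				    struct demo_queue *ioq0,
				    unsigned int nr_io_queues)
{
	int nvec, ret;

	/* Ask for one vector per I/O queue plus one extra for the adminq. */
	nvec = pci_alloc_irq_vectors(pdev, 2, nr_io_queues + 1,
				     PCI_IRQ_MSIX | PCI_IRQ_MSI);
	if (nvec < 0)
		return nvec;	/* fewer than 2 vectors: fall back to sharing */

	/* Vector 0 now belongs exclusively to the adminq... */
	ret = request_irq(pci_irq_vector(pdev, 0), demo_irq, 0,
			  "demo-adminq", adminq);
	if (ret)
		return ret;

	/* ...and ioq0 gets its own vector, no IRQF_SHARED needed. */
	return request_irq(pci_irq_vector(pdev, 1), demo_irq, 0,
			   "demo-ioq0", ioq0);
}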
Sincerely
Jianchao