Message-ID: <b2ec4cb3-5125-3fce-4b6d-f483b75d0b2e@oracle.com>
Date: Fri, 2 Mar 2018 11:11:22 +0800
From: "jianchao.wang" <jianchao.w.wang@...cle.com>
To: Keith Busch <keith.busch@...el.com>
Cc: Sagi Grimberg <sagi@...mberg.me>, Christoph Hellwig <hch@....de>,
axboe@...com, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and
ioq0
Hi Keith
Thanks for your kind guidance and for taking the time to look at this.
On 03/01/2018 11:15 PM, Keith Busch wrote:
> On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
>> When the adminq is free, the ioq0 irq completion path has to invoke nvme_irq twice:
>> once for itself and once for the adminq completion irq action.
>
> Let's be a little more careful on the terminology when referring to spec
> defined features: there is no such thing as "ioq0". The IO queues start
> at 1. The admin queue is the '0' index queue.
Yes, indeed. Sorry for the imprecise description.
>> We are trying to save every cpu cycle across the nvme host path, so why waste nvme_irq cycles here?
>> If we have enough vectors, we could allocate a separate irq vector for the adminq to avoid this.
>
> Please understand the _overwhelming_ majority of time spent for IRQ
> handling is the context switches. There's a reason you're not able to
> measure a perf difference between IOQ1 and IOQ2: the number of CPU cycles
> to chain a second action is negligible.
>
Yes, indeed.
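
Just to restate your point in code for my own understanding: the extra
cost of a second action chained on a shared vector is basically the
early return below. This is only a generic sketch, not the driver's
actual nvme_irq; the demo_* structs and names are made up for
illustration.

#include <linux/interrupt.h>
#include <linux/types.h>

/* Hypothetical per-queue context, only for illustration. */
struct demo_cqe {
        u16 status;                     /* bit 0 is the phase tag */
};

struct demo_queue {
        struct demo_cqe *cqes;
        u16 head;
        u16 phase;
};

static irqreturn_t demo_cq_irq(int irq, void *data)
{
        struct demo_queue *q = data;

        /*
         * The interrupt was raised for the other queue sharing this
         * vector: a couple of loads and an early return, nothing more.
         */
        if ((q->cqes[q->head].status & 1) != q->phase)
                return IRQ_NONE;

        /* ... reap completions, advance head, flip phase ... */
        return IRQ_HANDLED;
}

/* Both queues chain their handlers on the same vector: */
/* request_irq(vector, demo_cq_irq, IRQF_SHARED, "demo-queue", q); */

So when its own queue has nothing pending, the chained handler falls
through the early return and the cycles spent there are indeed
negligible.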
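
For reference, one way to give the adminq its own vector would be the
irq_affinity pre_vectors mechanism, roughly as below. Again only a
sketch with placeholder names (pdev, nr_io_queues), not the actual
patch, and the fallback when fewer vectors are granted is not handled
here.

#include <linux/interrupt.h>
#include <linux/pci.h>

static int demo_alloc_vectors(struct pci_dev *pdev, unsigned int nr_io_queues)
{
        struct irq_affinity affd = {
                .pre_vectors = 1,       /* vector 0 reserved for the adminq */
        };
        int ret;

        /* Ask for one vector per IO queue plus the reserved admin vector. */
        ret = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
                        PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
        if (ret < 0)
                return ret;

        /* IO queue i then uses vector i, for i = 1 .. ret - 1. */
        return ret;
}

With that, the adminq keeps vector 0 to itself and the IO queues get
vectors 1..N spread across the CPUs by the managed affinity code.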
Sincerely
Jianchao