Message-ID: <20180516144240.GA20223@localhost.localdomain>
Date: Wed, 16 May 2018 08:42:40 -0600
From: Keith Busch <keith.busch@...ux.intel.com>
To: Bharat Kumar Gogada <bharatku@...inx.com>
Cc: "linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"keith.busch@...el.com" <keith.busch@...el.com>,
"axboe@...com" <axboe@...com>, "hch@....de" <hch@....de>,
"sagi@...mberg.me" <sagi@...mberg.me>
Subject: Re: INTMS/INTMC not being used in NVME interrupt handling
On Wed, May 16, 2018 at 12:35:15PM +0000, Bharat Kumar Gogada wrote:
> Hi,
>
> As per NVME specification:
> 7.5.1.1 Host Software Interrupt Handling
> It is recommended that host software utilize the Interrupt Mask Set and Interrupt Mask Clear (INTMS/INTMC)
> registers to efficiently handle interrupts when configured to use pin based or MSI messages.
>
> In kernel 4.14, the interrupt handler in drivers/nvme/host/pci.c (nvme_irq)
> doesn't use these registers.
>
> Any reason why these registers are not used in nvme interrupt handler ?
I think you've answered your own question: we process completions in the
interrupt context. The interrupt is already masked at the CPU level in
this context, so there should be no reason to mask them at the device
level.
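For reference, the MSI/MSI-X fast path looks roughly like this (a simplified
sketch of the 4.14 code, not the verbatim source):

/* Simplified sketch of the 4.14-era nvme_irq() path, not verbatim source.
 * The hard-irq handler reaps the completion queue in place; nothing is
 * deferred, so there is nothing for INTMS/INTMC masking to protect. */
static irqreturn_t nvme_irq(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;
	irqreturn_t result;

	spin_lock(&nvmeq->q_lock);
	nvme_process_cq(nvmeq);		/* walk new CQ entries, complete requests */
	result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
	nvmeq->cqe_seen = 0;
	spin_unlock(&nvmeq->q_lock);
	return result;
}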
> Why is the NVMe driver not using a bottom half, instead processing all
> completion queues in the interrupt handler?
Performance.
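For comparison, an INTMS/INTMC scheme only really makes sense if completion
handling is deferred to a bottom half, roughly like the illustrative sketch
below. This is not existing driver code: the threaded-handler split and the
per-queue fields used here are assumptions; only the NVME_REG_INTMS and
NVME_REG_INTMC offsets come from the spec and include/linux/nvme.h.

/* Illustrative only: a pin-based/MSI handler deferring CQ processing to a
 * threaded bottom half, registered with request_threaded_irq().  The
 * nvme_queue layout is assumed, not taken from the current driver. */
static irqreturn_t nvme_intx_irq(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;

	/* Mask this vector at the device so it stops firing while the
	 * bottom half drains the completion queue. */
	writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMS);
	return IRQ_WAKE_THREAD;
}

static irqreturn_t nvme_intx_irq_thread(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;

	nvme_process_cq(nvmeq);
	/* Unmask once the completion queue has been drained. */
	writel(1 << nvmeq->cq_vector, nvmeq->dev->bar + NVME_REG_INTMC);
	return IRQ_HANDLED;
}

Every interrupt then pays two extra MMIO writes plus a handoff to the bottom
half before a single request completes; reaping the CQ directly in the
hard-irq handler avoids all of that.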