Message-ID: <20140321201109.GC5705@linux.intel.com>
Date: Fri, 21 Mar 2014 16:11:09 -0400
From: Matthew Wilcox <willy@...ux.intel.com>
To: liaohengquan1986 <liaohengquan1986@....com>
Cc: "Alexander Gordeev" <agordeev@...hat.com>,
linux-kernel@...r.kernel.org,
"Keith Busch" <keith.busch@...el.com>,
linux-nvme@...ts.infradead.org
Subject: Re: A question about NVMe's nvme-irq
On Fri, Mar 21, 2014 at 11:29:02AM +0800, liaohengquan1986 wrote:
> hello,
> There is a question that has been confusing me recently, about the function nvme_irq() below:
> static irqreturn_t nvme_irq(int irq, void *data)
> {
>         irqreturn_t result;
>         struct nvme_queue *nvmeq = data;
>         spin_lock(&nvmeq->q_lock);
>         nvme_process_cq(nvmeq);
>         result = nvmeq->cqe_seen ? IRQ_HANDLED : IRQ_NONE;
>         nvmeq->cqe_seen = 0;
>         spin_unlock(&nvmeq->q_lock);
>         return result;
> }
> If two CQEs each trigger an interrupt but arrive so close together that the first call to nvme_irq() handles both of them (including the CQE that triggered the second interrupt),
> then the second nvme_process_cq() will find no CQE in the CQ, nvmeq->cqe_seen will be 0, and nvme_irq() will return IRQ_NONE.
> I think this may be a bug: there really were two interrupts, so returning IRQ_NONE does not seem right, does it?
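
The behaviour you describe follows from how the completion queue is drained:
nvme_process_cq() walks the CQ until the phase bit of the next entry no longer
matches, so a single invocation consumes every completion posted so far,
including ones whose interrupt message has not been serviced yet.  A condensed
sketch of that drain loop (an illustration only, not the driver's exact code,
though the field names follow the driver):

static void example_process_cq(struct nvme_queue *nvmeq)
{
        u16 head = nvmeq->cq_head;
        u8 phase = nvmeq->cq_phase;

        /* Consume entries for as long as their phase bit matches ours */
        while ((le16_to_cpu(nvmeq->cqes[head].status) & 1) == phase) {
                /* ... look up and complete the command named by this CQE ... */
                if (++head == nvmeq->q_depth) {
                        head = 0;
                        phase = !phase;
                }
                nvmeq->cqe_seen = 1;
        }

        nvmeq->cq_head = head;
        nvmeq->cq_phase = phase;
        /* ... ring the CQ head doorbell with the new head ... */
}

By the time the second interrupt is delivered, a loop like the one above has
already advanced cq_head past both entries, so the handler finds nothing to do
and reports IRQ_NONE.
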
/* If the controller ignores the cq head doorbell and continuously
 * writes to the queue, it is theoretically possible to wrap around
 * the queue twice and mistakenly return IRQ_NONE. Linux only
 * requires that 0.1% of your interrupts are handled, so this isn't
 * a big problem.
 */
I should probably update & move that comment, but nevertheless, it
applies to your situation too.
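
For reference, the "0.1%" in that comment points at the kernel's
spurious-interrupt accounting, note_interrupt() in kernel/irq/spurious.c:
an occasional IRQ_NONE is harmless, because the core only intervenes when
nearly all of a large window of interrupts go unhandled.  Very roughly
(a simplified sketch with made-up names, not the kernel's actual code):

enum example_irqreturn { EX_IRQ_NONE = 0, EX_IRQ_HANDLED = 1 };

struct example_irq_stats {
        unsigned int irq_count;         /* interrupts seen in this window */
        unsigned int irqs_unhandled;    /* of those, how many were IRQ_NONE */
};

static void example_note_interrupt(struct example_irq_stats *s,
                                   enum example_irqreturn ret)
{
        s->irq_count++;
        if (ret == EX_IRQ_NONE)
                s->irqs_unhandled++;

        if (s->irq_count < 100000)
                return;

        /* Were more than 99,900 of the last 100,000 interrupts unhandled? */
        if (s->irqs_unhandled > 99900) {
                /* the kernel would warn here and may disable the line */
        }

        s->irq_count = 0;
        s->irqs_unhandled = 0;
}

So losing the race you describe once in a while only bumps the unhandled
count by one; the interrupt line is only at risk if the handler returns
IRQ_NONE for essentially every interrupt it receives.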