Message-ID: <alpine.LRH.2.03.1401211151110.25844@AMR>
Date: Tue, 21 Jan 2014 12:06:20 -0700 (MST)
From: Keith Busch <keith.busch@...el.com>
To: Alexander Gordeev <agordeev@...hat.com>
cc: Keith Busch <keith.busch@...el.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Matthew Wilcox <willy@...ux.intel.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-nvme@...ts.infradead.org,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>
Subject: Re: [PATCH 2/2] nvme: Cleanup nvme_dev_start() and fix IRQ leak
On Tue, 21 Jan 2014, Alexander Gordeev wrote:
> This is an attempt to move handling of the admin queue into a
> single scope. This update also fixes an IRQ leak in case
> nvme_setup_io_queues() fails to allocate enough iomem
> and bails out with -ENOMEM.
>
> Signed-off-by: Alexander Gordeev <agordeev@...hat.com>
> ---
> +static void nvme_teardown_admin_queue(struct nvme_dev *dev)
> +{
> +	nvme_disable_queue(dev, 0);
> +	nvme_free_queue(dev->queues[0]);
> +}
> @@ -2402,11 +2398,20 @@ static int nvme_dev_start(struct nvme_dev *dev)
>  	list_add(&dev->node, &dev_list);
>  	spin_unlock(&dev_list_lock);
>
> -	result = nvme_setup_io_queues(dev);
> -	if (result && result != -EBUSY)
> +	result = set_queue_count(dev, num_online_cpus());
> +	if (result == -EBUSY)
> +		return -EBUSY;
> +
> +	nvme_teardown_admin_queue(dev);
Oh no! Your new teardown function frees the admin queue, but that queue
is used immediately afterward in nvme_setup_io_queues() ...
> +
> +	if (result)
>  		goto disable;
... but you'll never actually get to set up the I/O queues, because
'result' here is non-zero even on success: it holds the number of queues
the controller can allocate. I think you meant to do this instead:

+	if (result < 0)
>
> -	return result;
> +	result = nvme_setup_io_queues(dev, result);
> +	if (result)
> +		goto disable;
> +
> +	return 0;
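Putting the two fixes together, the tail of nvme_dev_start() might look
roughly like the sketch below. This is untested and assumes, per your
patch, that set_queue_count() returns the negotiated I/O queue count
(>= 0) on success and a negative errno on failure; where the admin-queue
teardown belongs on the error path is my guess, not something the patch
shows:

	result = set_queue_count(dev, num_online_cpus());
	if (result == -EBUSY)
		return -EBUSY;
	if (result < 0)		/* only a negative return is an error */
		goto disable;

	/* keep the admin queue alive: nvme_setup_io_queues() still uses it */
	result = nvme_setup_io_queues(dev, result);
	if (result)
		goto disable;

	return 0;

 disable:
	/* assumed placement: tear down the admin queue only on failure */
	nvme_teardown_admin_queue(dev);
	return result;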