Message-ID: <1528906474.2289.155.camel@codethink.co.uk>
Date:   Wed, 13 Jun 2018 17:14:34 +0100
From:   Ben Hutchings <ben.hutchings@...ethink.co.uk>
To:     Jianchao Wang <jianchao.w.wang@...cle.com>,
        Keith Busch <keith.busch@...el.com>
Cc:     stable@...r.kernel.org,
        Sasha Levin <alexander.levin@...rosoft.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4.4 110/268] nvme-pci: Fix nvme queue cleanup if IRQ
 setup fails

On Mon, 2018-05-28 at 12:01 +0200, Greg Kroah-Hartman wrote:
> 4.4-stable review patch.  If anyone has any objections, please let me know.
> 
> ------------------
> 
> From: Jianchao Wang <jianchao.w.wang@...cle.com>
> 
> [ Upstream commit f25a2dfc20e3a3ed8fe6618c331799dd7bd01190 ]
> 
> This patch fixes nvme queue cleanup if requesting an IRQ handler for
> the queue's vector fails. It does this by resetting the cq_vector to
> the uninitialized value of -1 so it is ignored for a controller reset.
> 
> Signed-off-by: Jianchao Wang <jianchao.w.wang@...cle.com>
> [changelog updates, removed misc whitespace changes]
> Signed-off-by: Keith Busch <keith.busch@...el.com>
> Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> ---
>  drivers/nvme/host/pci.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1583,7 +1583,7 @@ static int nvme_create_queue(struct nvme
>  	nvmeq->cq_vector = qid - 1;
>  	result = adapter_alloc_cq(dev, qid, nvmeq);
>  	if (result < 0)
> -		return result;
> +		goto release_vector;
>  
>  	result = adapter_alloc_sq(dev, qid, nvmeq);
>  	if (result < 0)
> @@ -1597,9 +1597,12 @@ static int nvme_create_queue(struct nvme
>  	return result;
>  
>   release_sq:
> +	dev->online_queues--;

This addition looks wrong.  dev->online_queues is incremented by
nvme_init_queue(), but in 4.4 this function only calls nvme_init_queue()
after every step that can fail, i.e. at a point where queue creation is
already certain to succeed.  So why would a failure path need to
decrement it?
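
For reference, the order of operations in 4.4's nvme_create_queue() with
this patch applied is roughly the following (a simplified sketch from
memory, not verbatim from the 4.4 tree):

static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
{
	struct nvme_dev *dev = nvmeq->dev;
	int result;

	nvmeq->cq_vector = qid - 1;
	result = adapter_alloc_cq(dev, qid, nvmeq);
	if (result < 0)
		goto release_vector;

	result = adapter_alloc_sq(dev, qid, nvmeq);
	if (result < 0)
		goto release_cq;

	/* In 4.4 this is the last step that can fail ... */
	result = queue_request_irq(dev, nvmeq, nvmeq->irqname);
	if (result < 0)
		goto release_sq;

	/* ... and online_queues is only incremented here, after all of
	 * the failure points above. */
	nvme_init_queue(nvmeq, qid);
	return result;

 release_sq:
	dev->online_queues--;		/* the addition in question */
	adapter_delete_sq(dev, qid);
 release_cq:
	adapter_delete_cq(dev, qid);
 release_vector:
	nvmeq->cq_vector = -1;
	return result;
}

So as far as I can see, none of the error labels can be reached with
online_queues already incremented for this queue.  Upstream, if I'm
reading it right, nvme_init_queue() runs before queue_request_irq(),
which is why the decrement makes sense there but not in 4.4.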

Ben.

>  	adapter_delete_sq(dev, qid);
>   release_cq:
>  	adapter_delete_cq(dev, qid);
> + release_vector:
> +	nvmeq->cq_vector = -1;
>  	return result;
>  }
>  
> 
> 
> 
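
As an aside, the "ignored for a controller reset" part of the changelog
works because the teardown path skips queues still carrying the -1
sentinel; in 4.4 nvme_suspend_queue() looks roughly like this
(simplified sketch, not verbatim):

static int nvme_suspend_queue(struct nvme_queue *nvmeq)
{
	int vector;

	spin_lock_irq(&nvmeq->q_lock);
	if (nvmeq->cq_vector == -1) {
		/* no IRQ was set up for this queue; nothing to free */
		spin_unlock_irq(&nvmeq->q_lock);
		return 1;
	}
	vector = nvmeq->dev->entry[nvmeq->cq_vector].vector;
	nvmeq->dev->online_queues--;
	nvmeq->cq_vector = -1;
	spin_unlock_irq(&nvmeq->q_lock);
	...
	free_irq(vector, nvmeq);

	return 0;
}

so resetting cq_vector to -1 on the failure paths stops a later reset
from calling free_irq() on a vector that was never requested.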
-- 
Ben Hutchings, Software Developer                         Codethink Ltd
https://www.codethink.co.uk/                 Dale House, 35 Dale Street
                                     Manchester, M1 2HF, United Kingdom
