Date:   Wed, 9 May 2018 18:06:46 +0300
From:   Sagi Grimberg <sagi@...mberg.me>
To:     Jianchao Wang <jianchao.w.wang@...cle.com>, keith.busch@...el.com,
        axboe@...com, hch@....de
Cc:     linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue



On 05/04/2018 11:02 AM, Jianchao Wang wrote:
> When nvme_init_identify() in nvme_rdma_configure_admin_queue() fails,
> ctrl->queues[0] is freed but the NVME_RDMA_Q_LIVE flag is still set.
> If nvme_rdma_stop_queue() is then invoked, we incur a use-after-free
> that corrupts memory.
>   BUG: KASAN: use-after-free in rdma_disconnect+0x1f/0xe0 [rdma_cm]
>   Read of size 8 at addr ffff8801dc3969c0 by task kworker/u16:3/9304
> 
>   CPU: 3 PID: 9304 Comm: kworker/u16:3 Kdump: loaded Tainted: G        W         4.17.0-rc3+ #20
>   Workqueue: nvme-delete-wq nvme_delete_ctrl_work
>   Call Trace:
>    dump_stack+0x91/0xeb
>    print_address_description+0x6b/0x290
>    kasan_report+0x261/0x360
>    rdma_disconnect+0x1f/0xe0 [rdma_cm]
>    nvme_rdma_stop_queue+0x25/0x40 [nvme_rdma]
>    nvme_rdma_shutdown_ctrl+0xf3/0x150 [nvme_rdma]
>    nvme_delete_ctrl_work+0x98/0xe0
>    process_one_work+0x3ca/0xaa0
>    worker_thread+0x4e2/0x6c0
>    kthread+0x18d/0x1e0
>    ret_from_fork+0x24/0x30
> 
> To fix it, clear NVME_RDMA_Q_LIVE before freeing ctrl->queues[0].
> The queue is being freed, so it certainly is not LIVE any more.
> 
> Signed-off-by: Jianchao Wang <jianchao.w.wang@...cle.com>
> ---
>   drivers/nvme/host/rdma.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index fd965d0..ffbfe82 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -812,6 +812,11 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
>   	if (new)
>   		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
>   out_free_queue:
> +	/*
> +	 * The queue is about to be freed, so it is no longer LIVE.
> +	 * Clearing the flag avoids a use-after-free in nvme_rdma_stop_queue().
> +	 */
> +	clear_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
>   	nvme_rdma_free_queue(&ctrl->queues[0]);
>   	return error;
>   }
> 
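
For reference, nvme_rdma_stop_queue() at this point looks roughly like
the sketch below (paraphrased, the exact body in this tree may differ);
the rdma_disconnect() call is where the KASAN report above fires when a
stale LIVE bit survives on a freed queue:

        static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
        {
                /* only the LIVE bit guards the teardown below */
                if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
                        return;
                /* with a stale bit on a freed queue, this is the UAF above */
                rdma_disconnect(queue->cm_id);
                /* ... rest of the teardown (e.g. draining the QP) ... */
        }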

The correct fix would be to add a goto label for stop_queue and call
nvme_rdma_stop_queue() in all the failure paths after
nvme_rdma_start_queue().
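
Something like the sketch below (illustrative only: the out_stop_queue
label and the condensed call sequence are my shorthand, not the actual
function):

        static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
                        bool new)
        {
                int error;

                /* ... allocate the queue and (if new) the admin tagset ... */

                error = nvme_rdma_start_queue(ctrl, 0);
                if (error)
                        goto out_cleanup_queue;

                error = nvme_init_identify(&ctrl->ctrl);
                if (error)
                        goto out_stop_queue;    /* stop, don't just free */

                return 0;

        out_stop_queue:
                /* clears NVME_RDMA_Q_LIVE and disconnects while still valid */
                nvme_rdma_stop_queue(&ctrl->queues[0]);
        out_cleanup_queue:
                /* ... free tagset/queue exactly as the existing path does ... */
                nvme_rdma_free_queue(&ctrl->queues[0]);
                return error;
        }

That way every failure after the queue goes live unwinds through
nvme_rdma_stop_queue(), rather than each error site having to remember
to clear the bit by hand.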
