Open Source and information security mailing list archives
 
Message-ID: <7ab943e0-5ac4-d370-0a15-3108f689e478@suse.de>
Date:   Mon, 3 May 2021 15:33:32 +0200
From:   Hannes Reinecke <hare@...e.de>
To:     Daniel Wagner <dwagner@...e.de>, linux-nvme@...ts.infradead.org
Cc:     linux-kernel@...r.kernel.org, Keith Busch <kbusch@...nel.org>,
        Jens Axboe <axboe@...com>, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] nvme-multipath: Reset bi_disk to ns head when failover

On 5/3/21 2:57 PM, Daniel Wagner wrote:
> The path can be stale when we fail over. If we don't reset the bdev to
> the ns head and the I/O eventually completes in end_io(), it triggers
> a crash. Reset the bio's device to the ns head disk so that the submit
> path can map the request to an active path.
> 
> Signed-off-by: Daniel Wagner <dwagner@...e.de>
> ---
> 
> The patch is against nvme-5.13.
> 
> [ 6552.155244] Call Trace:
> [ 6552.155251]  bio_endio+0x74/0x120
> [ 6552.155260]  nvme_ns_head_submit_bio+0x36f/0x3e0 [nvme_core]
> [ 6552.155266]  ? __switch_to_asm+0x34/0x70
> [ 6552.155269]  ? __switch_to_asm+0x40/0x70
> [ 6552.155271]  submit_bio_noacct+0x175/0x490
> [ 6552.155274]  ? __switch_to_asm+0x34/0x70
> [ 6552.155277]  ? __switch_to_asm+0x34/0x70
> [ 6552.155284]  ? nvme_requeue_work+0x5a/0x70 [nvme_core]
> [ 6552.155290]  nvme_requeue_work+0x5a/0x70 [nvme_core]
> [ 6552.155296]  process_one_work+0x1f4/0x3e0
> [ 6552.155299]  worker_thread+0x2d/0x3e0
> [ 6552.155302]  ? process_one_work+0x3e0/0x3e0
> [ 6552.155305]  kthread+0x10d/0x130
> [ 6552.155307]  ? kthread_park+0xa0/0xa0
> [ 6552.155311]  ret_from_fork+0x35/0x40
> 
>   drivers/nvme/host/multipath.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 0d0de3433f37..0faf267faa58 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -69,7 +69,9 @@ void nvme_failover_req(struct request *req)
>   {
>   	struct nvme_ns *ns = req->q->queuedata;
>   	u16 status = nvme_req(req)->status & 0x7ff;
> +	struct block_device *bdev;
>   	unsigned long flags;
> +	struct bio *bio;
>   
>   	nvme_mpath_clear_current_path(ns);
>   
> @@ -83,9 +85,13 @@ void nvme_failover_req(struct request *req)
>   		queue_work(nvme_wq, &ns->ctrl->ana_work);
>   	}
>   
> +	bdev = bdget_disk(ns->head->disk, 0);
>   	spin_lock_irqsave(&ns->head->requeue_lock, flags);
> +	for (bio = req->bio; bio; bio = bio->bi_next)
> +		bio_set_dev(bio, bdev);
>   	blk_steal_bios(&ns->head->requeue_list, req);
>   	spin_unlock_irqrestore(&ns->head->requeue_lock, flags);
> +	bdput(bdev);
>   
>   	blk_mq_end_request(req, 0);
>   	kblockd_schedule_work(&ns->head->requeue_work);
> 
Maybe add a WARN_ON(!bdev) after bdget_disk(), but otherwise:

Reviewed-by: Hannes Reinecke <hare@...e.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@...e.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
