Message-ID: <93e8d113-55bb-e859-bf3d-54433dd23683@grimberg.me>
Date:   Mon, 23 Aug 2021 10:16:23 -0700
From:   Sagi Grimberg <sagi@...mberg.me>
To:     Daniel Wagner <dwagner@...e.de>, linux-nvme@...ts.infradead.org
Cc:     linux-kernel@...r.kernel.org, Hannes Reinecke <hare@...e.de>
Subject: Re: [PATCH v3] nvme: revalidate paths during rescan



On 8/11/21 8:28 AM, Daniel Wagner wrote:
> From: Hannes Reinecke <hare@...e.de>
> 
> When triggering a rescan due to a namespace resize we will be
> receiving AENs on every controller, triggering a rescan of all
> attached namespaces. If multipath is active only the current path and
> the ns_head disk will be updated, the other paths will still refer to
> the old size until AENs for the remaining controllers are received.
> 
> If I/O comes in before that it might be routed to one of the old
> paths, triggering an I/O failure with 'access beyond end of device'.
> With this patch the old paths are skipped from multipath path
> selection until the controller serving these paths has been rescanned.
> 
> Signed-off-by: Hannes Reinecke <hare@...e.de>
> [dwagner: - introduce NVME_NS_READY flag instead of NVME_NS_INVALIDATE
>            - use 'revalidate' instead of 'invalidate' which
> 	    follows the zoned device code path.]
> Tested-by: Daniel Wagner <dwagner@...e.de>
> Signed-off-by: Daniel Wagner <dwagner@...e.de>
> ---
> v3:
>    - Renamed nvme_mpath_invalidated_paths to nvme_mpath_revalidate_paths()
>    - Replaced NVME_NS_INVALIDATE with NVME_NS_READY
> v2:
>    - https://lore.kernel.org/linux-nvme/20210730071059.124347-1-dwagner@suse.de/
>    - removed churn from failed rebase.
> v1:
>    - https://lore.kernel.org/linux-nvme/20210729194630.i5mhvvgb73duojqq@beryllium.lan/
> 
>   drivers/nvme/host/core.c      |  3 +++
>   drivers/nvme/host/multipath.c | 17 ++++++++++++++++-
>   drivers/nvme/host/nvme.h      |  5 +++++
>   3 files changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 2f0cbaba12ac..54aafde4f556 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1878,6 +1878,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
>   			goto out_unfreeze;
>   	}
>   
> +	set_bit(NVME_NS_READY, &ns->flags);
>   	blk_mq_unfreeze_queue(ns->disk->queue);
>   
>   	if (blk_queue_is_zoned(ns->queue)) {
> @@ -1889,6 +1890,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
>   	if (nvme_ns_head_multipath(ns->head)) {
>   		blk_mq_freeze_queue(ns->head->disk->queue);
>   		nvme_update_disk_info(ns->head->disk, ns, id);
> +		nvme_mpath_revalidate_paths(ns);
>   		blk_stack_limits(&ns->head->disk->queue->limits,
>   				 &ns->queue->limits, 0);
>   		blk_queue_update_readahead(ns->head->disk->queue);
> @@ -3816,6 +3818,7 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>   	if (test_and_set_bit(NVME_NS_REMOVING, &ns->flags))
>   		return;
>   
> +	clear_bit(NVME_NS_READY, &ns->flags);
>   	set_capacity(ns->disk, 0);
>   	nvme_fault_inject_fini(&ns->fault_inject);
>   
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 3f32c5e86bfc..d390f14b8bb6 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -147,6 +147,21 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
>   	mutex_unlock(&ctrl->scan_lock);
>   }
>   
> +void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
> +{
> +	struct nvme_ns_head *head = ns->head;
> +	sector_t capacity = get_capacity(head->disk);
> +	int node;
> +
> +	for_each_node(node)
> +		rcu_assign_pointer(head->current_path[node], NULL);
> +
> +	list_for_each_entry_rcu(ns, &head->list, siblings) {
> +		if (capacity != get_capacity(ns->disk))
> +			clear_bit(NVME_NS_READY, &ns->flags);
> +	}

Shouldn't the NULL assignment to current_path come after
we clear NVME_NS_READY on the ns? Otherwise I/O may still be
submitted in that window and current_path will be repopulated
with the stale ns again...
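
i.e. something along these lines (untested, just to illustrate the
ordering; same function and names as in the patch above):

void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
{
	struct nvme_ns_head *head = ns->head;
	sector_t capacity = get_capacity(head->disk);
	int node;

	/* first mark every sibling with a stale size as not ready */
	list_for_each_entry_rcu(ns, &head->list, siblings) {
		if (capacity != get_capacity(ns->disk))
			clear_bit(NVME_NS_READY, &ns->flags);
	}

	/*
	 * only then drop the cached paths, so the next submission has
	 * to re-run path selection and cannot cache a not-ready ns
	 */
	for_each_node(node)
		rcu_assign_pointer(head->current_path[node], NULL);
}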
