Message-ID: <YS+AYA+5/o8Qj08Q@infradead.org>
Date: Wed, 1 Sep 2021 14:30:08 +0100
From: Christoph Hellwig <hch@...radead.org>
To: Hannes Reinecke <hare@...e.de>
Cc: Daniel Wagner <dwagner@...e.de>, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org, Keith Busch <kbusch@...nel.org>
Subject: Re: [PATCH v1] nvme: avoid race in shutdown namespace removal
On Mon, Aug 30, 2021 at 07:14:02PM +0200, Hannes Reinecke wrote:
> On 8/30/21 12:04 PM, Daniel Wagner wrote:
> > On Mon, Aug 30, 2021 at 11:36:18AM +0200, Daniel Wagner wrote:
> > > Though one thing I am not really sure about is how this interacts with
> > > nvme_init_ns_head(), as we could still be running nvme_init_ns_head()
> > > after we have set last_path = true. I haven't really figured
> > > out yet what that would mean. Is this a real problem?
> >
> > I suspect it will regress the very thing 5396fdac56d8 ("nvme: fix
> > refcounting imbalance when all paths are down") tried to fix.
> >
> Most likely. Do drop me a mail if you need to know how to create a reproducer
> for that; it's not exactly trivial, as you need to patch qemu for it
> (and, of course, those patches will not go upstream, as they again hit a
> section which the maintainer has deemed due to be reworked any time now. So of
> course he can't possibly apply them.)
> (I seem to have a particular spell of bad luck, seeing that it's the _third_
> time this has happened to me :-( )
Soo. What is the problem with simply checking in nvme_find_ns_head that
h->list is non-empty? E.g. this variant of Daniel's patch:
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d535b00d65816..ce91655fa29bb 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3523,7 +3523,9 @@ static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
 	lockdep_assert_held(&subsys->lock);
 
 	list_for_each_entry(h, &subsys->nsheads, entry) {
-		if (h->ns_id == nsid && nvme_tryget_ns_head(h))
+		if (h->ns_id != nsid)
+			continue;
+		if (!list_empty(&h->list) && nvme_tryget_ns_head(h))
 			return h;
 	}
 
@@ -3835,7 +3837,11 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	mutex_lock(&ns->ctrl->subsys->lock);
 	list_del_rcu(&ns->siblings);
-	mutex_unlock(&ns->ctrl->subsys->lock);
+	if (list_empty(&ns->head->list)) {
+		list_del_init(&ns->head->entry);
+		last_path = true;
+	}
+	mutex_unlock(&ns->head->subsys->lock);
 
 	/* guarantee not available in head->list */
 	synchronize_rcu();
@@ -3855,13 +3861,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 	list_del_init(&ns->list);
 	up_write(&ns->ctrl->namespaces_rwsem);
 
-	/* Synchronize with nvme_init_ns_head() */
-	mutex_lock(&ns->head->subsys->lock);
-	if (list_empty(&ns->head->list)) {
-		list_del_init(&ns->head->entry);
-		last_path = true;
-	}
-	mutex_unlock(&ns->head->subsys->lock);
 	if (last_path)
 		nvme_mpath_shutdown_disk(ns->head);
 	nvme_put_ns(ns);
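
To spell out the idea behind the two hunks outside of kernel context, here is a
stand-alone sketch of the pattern (plain C, NOT kernel code; every name below is
invented for illustration, and only the "hand out a head only while it still has
a path linked, under the subsystem lock" logic mirrors the patch):

/*
 * Stand-alone model of the pattern from the hunks above: the lookup only
 * returns a head that still has at least one path, and the path removal that
 * empties the list marks the head as done under the same lock, so a racing
 * scan cannot resurrect a head whose last path is going away.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct head {
	int nsid;
	int nr_paths;		/* stands in for list_empty(&h->list) */
	int refcount;
	struct head *next;	/* subsystem's list of heads */
};

static pthread_mutex_t subsys_lock = PTHREAD_MUTEX_INITIALIZER;
static struct head *heads;	/* models subsys->nsheads; callers hold subsys_lock */

/* models nvme_find_ns_head(): skip heads that have no path left */
static struct head *find_head(int nsid)
{
	struct head *h;

	for (h = heads; h; h = h->next) {
		if (h->nsid != nsid)
			continue;
		if (h->nr_paths > 0) {	/* !list_empty(&h->list) */
			h->refcount++;	/* nvme_tryget_ns_head() */
			return h;
		}
	}
	return NULL;
}

/* models the nvme_ns_remove() hunk: last path gone => head no longer findable */
static bool remove_path(struct head *h)
{
	bool last_path = false;

	pthread_mutex_lock(&subsys_lock);
	h->nr_paths--;			/* list_del_rcu(&ns->siblings) */
	if (h->nr_paths == 0)		/* list_del_init(&head->entry) in the patch */
		last_path = true;
	pthread_mutex_unlock(&subsys_lock);
	return last_path;
}

int main(void)
{
	struct head h = { .nsid = 1, .nr_paths = 1, .refcount = 1 };

	heads = &h;
	remove_path(&h);		/* tear down the only path */

	pthread_mutex_lock(&subsys_lock);
	/* a racing lookup no longer finds the dying head; it would allocate a new one */
	printf("lookup after last path removed: %p\n", (void *)find_head(1));
	pthread_mutex_unlock(&subsys_lock);
	return 0;
}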