Message-Id: <20200908152249.710294557@linuxfoundation.org>
Date: Tue, 8 Sep 2020 17:25:08 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ben Marzinski <bmarzins@...hat.com>,
Mike Snitzer <snitzer@...hat.com>
Subject: [PATCH 5.8 166/186] dm mpath: fix racey management of PG initialization

From: Mike Snitzer <snitzer@...hat.com>

commit c322ee9320eaa4013ca3620b1130992916b19b31 upstream.

Commit 935fcc56abc3 ("dm mpath: only flush workqueue when needed")
changed flush_multipath_work() to avoid needlessly flushing a global
multipath workqueue. But that change didn't account for the fact that
the surrounding flush_multipath_work() code should also only run if
'pg_init_in_progress' is set.

Fix this by only doing all of flush_multipath_work()'s PG init related
work if 'pg_init_in_progress' is set.

Otherwise multipath_wait_for_pg_init_completion() will run
unconditionally but the preceding flush_workqueue(kmpath_handlerd)
may not. This could lead to deadlock (though only if kmpath_handlerd
never runs a corresponding work to decrement 'pg_init_in_progress').
It could also be, though highly unlikely, that the kmpath_handlerd
work that does PG init completes before 'pg_init_in_progress' is set,
and then an intervening DM table reload's multipath_postsuspend()
triggers flush_multipath_work().
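
For context, here is roughly how flush_multipath_work() reads with this
change applied (a sketch assembled from the hunk below, with comments
added; see the diff for the authoritative change):

static void flush_multipath_work(struct multipath *m)
{
	if (m->hw_handler_name) {
		unsigned long flags;

		/* Fast path: no PG init in flight, nothing to flush or wait for. */
		if (!atomic_read(&m->pg_init_in_progress))
			goto skip;

		/* Re-check under m->lock and disable further PG init attempts. */
		spin_lock_irqsave(&m->lock, flags);
		if (atomic_read(&m->pg_init_in_progress) &&
		    !test_and_set_bit(MPATHF_PG_INIT_DISABLED, &m->flags)) {
			spin_unlock_irqrestore(&m->lock, flags);

			/* The flush is now always paired with the wait. */
			flush_workqueue(kmpath_handlerd);
			multipath_wait_for_pg_init_completion(m);

			spin_lock_irqsave(&m->lock, flags);
			clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
		}
		spin_unlock_irqrestore(&m->lock, flags);
	}
skip:
	if (m->queue_mode == DM_TYPE_BIO_BASED)
		flush_work(&m->process_queued_bios);
	flush_work(&m->trigger_event);
}
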
Fixes: 935fcc56abc3 ("dm mpath: only flush workqueue when needed")
Cc: stable@...r.kernel.org
Reported-by: Ben Marzinski <bmarzins@...hat.com>
Signed-off-by: Mike Snitzer <snitzer@...hat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/md/dm-mpath.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -1247,17 +1247,25 @@ static void multipath_wait_for_pg_init_c
 static void flush_multipath_work(struct multipath *m)
 {
 	if (m->hw_handler_name) {
-		set_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
-		smp_mb__after_atomic();
+		unsigned long flags;
+
+		if (!atomic_read(&m->pg_init_in_progress))
+			goto skip;
+
+		spin_lock_irqsave(&m->lock, flags);
+		if (atomic_read(&m->pg_init_in_progress) &&
+		    !test_and_set_bit(MPATHF_PG_INIT_DISABLED, &m->flags)) {
+			spin_unlock_irqrestore(&m->lock, flags);
 
-		if (atomic_read(&m->pg_init_in_progress))
 			flush_workqueue(kmpath_handlerd);
-		multipath_wait_for_pg_init_completion(m);
+			multipath_wait_for_pg_init_completion(m);
 
-		clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
-		smp_mb__after_atomic();
+			spin_lock_irqsave(&m->lock, flags);
+			clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
+		}
+		spin_unlock_irqrestore(&m->lock, flags);
 	}
-
+skip:
 	if (m->queue_mode == DM_TYPE_BIO_BASED)
 		flush_work(&m->process_queued_bios);
 	flush_work(&m->trigger_event);