Message-Id: <20240301095657.662111-10-yukuai1@huaweicloud.com>
Date: Fri, 1 Mar 2024 17:56:57 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: zkabelac@...hat.com,
xni@...hat.com,
agk@...hat.com,
snitzer@...nel.org,
mpatocka@...hat.com,
dm-devel@...ts.linux.dev,
song@...nel.org,
yukuai3@...wei.com,
heinzm@...hat.com,
neilb@...e.de,
jbrassow@...hat.com
Cc: linux-kernel@...r.kernel.org,
linux-raid@...r.kernel.org,
yukuai1@...weicloud.com,
yi.zhang@...wei.com,
yangerkun@...wei.com
Subject: [PATCH -next 9/9] dm-raid: fix lockdep warning in "pers->hot_add_disk"

From: Yu Kuai <yukuai3@...wei.com>

The lockdep assert was added by commit a448af25becf ("md/raid10: remove
rcu protection to access rdev from conf") in print_conf(), and I didn't
notice that dm-raid calls "pers->hot_add_disk" without holding
'reconfig_mutex'.

"pers->hot_add_disk" reads and writes many fields that are protected by
'reconfig_mutex', and raid_resume() already grabs the lock in other
contexts. Hence fix this problem by protecting "pers->hot_add_disk"
with the lock.
Fixes: 9092c02d9435 ("DM RAID: Add ability to restore transiently failed devices on resume")
Fixes: a448af25becf ("md/raid10: remove rcu protection to access rdev from conf")
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
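A note for reviewers, not intended for the final commit message: the
lockdep assert mentioned above sits in print_conf() in
drivers/md/raid10.c, and "pers->hot_add_disk" reaches it. From my
reading of commit a448af25becf the relevant lines look roughly like the
sketch below (elided, and possibly not byte-exact), which is why every
path into "pers->hot_add_disk" must now hold 'reconfig_mutex':

static void print_conf(struct r10conf *conf)
{
	...
	/* rcu protection was removed, so callers must hold 'reconfig_mutex' */
	lockdep_assert_held(&conf->mddev->reconfig_mutex);

	for (i = 0; i < conf->geo.raid_disks; i++) {
		struct md_rdev *rdev = conf->mirrors[i].rdev;
		...
	}
}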
drivers/md/dm-raid.c | 2 ++
1 file changed, 2 insertions(+)
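
In case it saves a lookup while reviewing the hunk below:
mddev_lock_nointr() is the uninterruptible helper for 'reconfig_mutex'.
A minimal sketch based on my reading of drivers/md/md.h (the matching
mddev_unlock() lives in md.c and also kicks off deferred work, so it is
not shown here):

/* sketch only, not part of this patch */
static inline void mddev_lock_nointr(struct mddev *mddev)
{
	mutex_lock(&mddev->reconfig_mutex);
}

The nointr variant is used presumably because raid_resume() has no way
to report -EINTR, unlike callers of mddev_lock().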
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 64d381123ce3..97ad4a8582c4 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -4091,7 +4091,9 @@ static void raid_resume(struct dm_target *ti)
 		 * Take this opportunity to check whether any failed
 		 * devices are reachable again.
 		 */
+		mddev_lock_nointr(mddev);
 		attempt_restore_of_faulty_devices(rs);
+		mddev_unlock(mddev);
 	}
 
 	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
--
2.39.2