Message-Id: <20210913131119.147624433@linuxfoundation.org>
Date: Mon, 13 Sep 2021 15:15:51 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Xiao Ni <xni@...hat.com>,
Guoqing Jiang <guoqing.jiang@...ux.dev>,
Song Liu <songliubraving@...com>
Subject: [PATCH 5.13 290/300] md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard
From: Xiao Ni <xni@...hat.com>
commit 46d4703b1db4c86ab5acb2331b10df999f005e8e upstream.
We are seeing the following warning in raid10_handle_discard.
[ 695.110751] =============================
[ 695.131439] WARNING: suspicious RCU usage
[ 695.151389] 4.18.0-319.el8.x86_64+debug #1 Not tainted
[ 695.174413] -----------------------------
[ 695.192603] drivers/md/raid10.c:1776 suspicious rcu_dereference_check() usage!
[ 695.225107] other info that might help us debug this:
[ 695.260940] rcu_scheduler_active = 2, debug_locks = 1
[ 695.290157] no locks held by mkfs.xfs/10186.
The first loop of raid10_handle_discard already determines which
disks need to handle the discard request and takes a reference on
each of them by incrementing rdev->nr_pending. So conf->mirrors will
not change until all bios come back from the underlying disks, and
the second loop doesn't need to use rcu_dereference to get rdev.
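As a minimal sketch of the pattern (the sketch_* names and the fixed
disk count are hypothetical stand-ins for the real md structures, not
the actual raid10 code): the first loop pins each chosen rdev via
nr_pending under rcu_read_lock(), so the second loop can use a plain
pointer load.

#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/types.h>

/* Hypothetical stand-ins for md_rdev / raid10_info, reduced to this example. */
struct sketch_rdev {
	atomic_t nr_pending;		/* pins the device while I/O is outstanding */
};

struct sketch_mirror {
	struct sketch_rdev *rdev;	/* published and retired under RCU */
};

#define SKETCH_RAID_DISKS 4

static void sketch_handle_discard(struct sketch_mirror *mirrors)
{
	bool pinned[SKETCH_RAID_DISKS] = { false };
	int disk;

	/* First loop: select targets under rcu_read_lock() and pin them. */
	rcu_read_lock();
	for (disk = 0; disk < SKETCH_RAID_DISKS; disk++) {
		struct sketch_rdev *rdev = rcu_dereference(mirrors[disk].rdev);

		if (rdev) {
			atomic_inc(&rdev->nr_pending);
			pinned[disk] = true;	/* the real code records this in devs[disk].bio */
		}
	}
	rcu_read_unlock();

	/*
	 * Second loop: a pinned rdev cannot be torn down until nr_pending
	 * drops, so a plain load is enough; wrapping it in rcu_dereference()
	 * outside a read-side critical section is what tripped the
	 * "suspicious RCU usage" check.
	 */
	for (disk = 0; disk < SKETCH_RAID_DISKS; disk++) {
		struct sketch_rdev *rdev = mirrors[disk].rdev;

		if (!pinned[disk])
			continue;
		/* ... clone and submit the discard bio to rdev ... */
		atomic_dec(&rdev->nr_pending);	/* the real code uses rdev_dec_pending() */
	}
}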
Cc: stable@...r.kernel.org
Fixes: d30588b2731f ('md/raid10: improve raid10 discard request')
Signed-off-by: Xiao Ni <xni@...hat.com>
Acked-by: Guoqing Jiang <guoqing.jiang@...ux.dev>
Signed-off-by: Song Liu <songliubraving@...com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/md/raid10.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1706,6 +1706,11 @@ retry_discard:
 	} else
 		r10_bio->master_bio = (struct bio *)first_r10bio;
 
+	/*
+	 * first select target devices under rcu_lock and
+	 * inc refcount on their rdev. Record them by setting
+	 * bios[x] to bio
+	 */
 	rcu_read_lock();
 	for (disk = 0; disk < geo->raid_disks; disk++) {
 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
@@ -1737,9 +1742,6 @@ retry_discard:
 	for (disk = 0; disk < geo->raid_disks; disk++) {
 		sector_t dev_start, dev_end;
 		struct bio *mbio, *rbio = NULL;
-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
-		struct md_rdev *rrdev = rcu_dereference(
-			conf->mirrors[disk].replacement);
 
 		/*
 		 * Now start to calculate the start and end address for each disk.
@@ -1769,9 +1771,12 @@ retry_discard:
 
 		/*
 		 * It only handles discard bio which size is >= stripe size, so
-		 * dev_end > dev_start all the time
+		 * dev_end > dev_start all the time.
+		 * It doesn't need to use rcu lock to get rdev here. We already
+		 * add rdev->nr_pending in the first loop.
 		 */
 		if (r10_bio->devs[disk].bio) {
+			struct md_rdev *rdev = conf->mirrors[disk].rdev;
 			mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
 			mbio->bi_end_io = raid10_end_discard_request;
 			mbio->bi_private = r10_bio;
@@ -1784,6 +1789,7 @@ retry_discard:
 			bio_endio(mbio);
 		}
 		if (r10_bio->devs[disk].repl_bio) {
+			struct md_rdev *rrdev = conf->mirrors[disk].replacement;
 			rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
 			rbio->bi_end_io = raid10_end_discard_request;
 			rbio->bi_private = r10_bio;