Message-Id: <20240222075806.1816400-10-yukuai1@huaweicloud.com>
Date: Thu, 22 Feb 2024 15:58:05 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: paul.e.luse@...ux.intel.com,
song@...nel.org,
neilb@...e.com,
shli@...com
Cc: linux-raid@...r.kernel.org,
linux-kernel@...r.kernel.org,
yukuai3@...wei.com,
yukuai1@...weicloud.com,
yi.zhang@...wei.com,
yangerkun@...wei.com
Subject: [PATCH md-6.9 09/10] md/raid1: factor out the code to manage sequential IO
From: Yu Kuai <yukuai3@...wei.com>
There are no functional changes for now; this makes read_balance() cleaner and
prepares to fix problems and refactor the handling of sequential IO.
Co-developed-by: Paul Luse <paul.e.luse@...ux.intel.com>
Signed-off-by: Paul Luse <paul.e.luse@...ux.intel.com>
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
drivers/md/raid1.c | 71 +++++++++++++++++++++++++---------------------
1 file changed, 38 insertions(+), 33 deletions(-)
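Reviewer note, not part of the patch: below is a minimal standalone sketch of
the should_choose_next() condition with the raid1 types stubbed out. The names
mirror_stub, should_choose_next_stub and MAX_SECTOR are hypothetical stand-ins
(for struct raid1_info, should_choose_next() and MaxSector); in the real code
opt_iosize comes from bdev_io_opt(), which reports bytes, hence the >> 9 shift
to sectors in the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_SECTOR (~(uint64_t)0)      /* stand-in for the kernel's MaxSector */

struct mirror_stub {                   /* hypothetical stand-in for struct raid1_info */
	bool nonrot;                   /* stand-in for test_bit(Nonrot, &rdev->flags) */
	uint64_t seq_start;            /* first sector of the current sequential run */
	uint64_t next_seq_sect;        /* next expected sector of the run */
};

/*
 * Mirrors the should_choose_next() logic: only on a non-rotational disk,
 * and only once the buffered sequential run is at least opt_iosize
 * sectors long, is it worth handing the read to an idle disk.
 */
static bool should_choose_next_stub(const struct mirror_stub *m, uint64_t opt_iosize)
{
	if (!m->nonrot)
		return false;
	return opt_iosize > 0 && m->seq_start != MAX_SECTOR &&
	       m->next_seq_sect > opt_iosize &&
	       m->next_seq_sect - opt_iosize >= m->seq_start;
}

int main(void)
{
	struct mirror_stub m = { .nonrot = true, .seq_start = 0, .next_seq_sect = 512 };

	/* 512 sectors buffered >= one 256-sector optimal chunk: switching is allowed. */
	printf("%d\n", should_choose_next_stub(&m, 256));    /* prints 1 */
	return 0;
}

With opt_iosize == 256, seq_start == 0 and next_seq_sect == 512, the check
512 - 256 >= 0 holds: the disk has already absorbed at least one optimal-size
chunk of the sequential run, so read_balance() may prefer an idle disk instead.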
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 4694e0e71e36..223ef8d06f67 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -705,6 +705,31 @@ static int choose_slow_rdev(struct r1conf *conf, struct r1bio *r1_bio,
return bb_disk;
}
+static bool is_sequential(struct r1conf *conf, int disk, struct r1bio *r1_bio)
+{
+ /* TODO: address issues with this check and concurrency. */
+ return conf->mirrors[disk].next_seq_sect == r1_bio->sector ||
+ conf->mirrors[disk].head_position == r1_bio->sector;
+}
+
+/*
+ * If the buffered sequential IO size exceeds the optimal iosize, check if
+ * there is an idle disk. If yes, choose the idle disk.
+ */
+static bool should_choose_next(struct r1conf *conf, int disk)
+{
+ struct raid1_info *mirror = &conf->mirrors[disk];
+ int opt_iosize;
+
+ if (!test_bit(Nonrot, &mirror->rdev->flags))
+ return false;
+
+ opt_iosize = bdev_io_opt(mirror->rdev->bdev) >> 9;
+ return opt_iosize > 0 && mirror->seq_start != MaxSector &&
+ mirror->next_seq_sect > opt_iosize &&
+ mirror->next_seq_sect - opt_iosize >= mirror->seq_start;
+}
+
/*
* This routine returns the disk from which the requested read should
* be done. There is a per-array 'next expected sequential IO' sector
@@ -767,42 +792,22 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
pending = atomic_read(&rdev->nr_pending);
dist = abs(this_sector - conf->mirrors[disk].head_position);
/* Don't change to another disk for sequential reads */
- if (conf->mirrors[disk].next_seq_sect == this_sector
- || dist == 0) {
- int opt_iosize = bdev_io_opt(rdev->bdev) >> 9;
- struct raid1_info *mirror = &conf->mirrors[disk];
-
- /*
- * If buffered sequential IO size exceeds optimal
- * iosize, check if there is idle disk. If yes, choose
- * the idle disk. read_balance could already choose an
- * idle disk before noticing it's a sequential IO in
- * this disk. This doesn't matter because this disk
- * will idle, next time it will be utilized after the
- * first disk has IO size exceeds optimal iosize. In
- * this way, iosize of the first disk will be optimal
- * iosize at least. iosize of the second disk might be
- * small, but not a big deal since when the second disk
- * starts IO, the first disk is likely still busy.
- */
- if (test_bit(Nonrot, &rdev->flags) && opt_iosize > 0 &&
- mirror->seq_start != MaxSector &&
- mirror->next_seq_sect > opt_iosize &&
- mirror->next_seq_sect - opt_iosize >=
- mirror->seq_start) {
- /*
- * Add 'pending' to avoid choosing this disk if
- * there is other idle disk.
- * Set 'dist' to 0, so that if there is no other
- * idle disk and all disks are rotational, this
- * disk will still be chosen.
- */
- pending++;
- dist = 0;
- } else {
+ if (is_sequential(conf, disk, r1_bio)) {
+ if (!should_choose_next(conf, disk)) {
best_disk = disk;
break;
}
+
+ /*
+ * Add 'pending' to avoid choosing this disk if there is
+ * another idle disk.
+ */
+ pending++;
+ /*
+ * Set 'dist' to 0, so that if there is no other idle
+ * disk, this disk will still be chosen.
+ */
+ dist = 0;
}
if (min_pending > pending) {
--
2.39.2