Message-Id: <20220720093048.225944-3-wangyoua@uniontech.com>
Date: Wed, 20 Jul 2022 17:30:48 +0800
From: Wang You <wangyoua@...ontech.com>
To: axboe@...nel.dk
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
hch@....de, jaegeuk@...nel.org, fio@...r.kernel.org,
ming.lei@...hat.com, wangyoua@...ontech.com,
wangxiaohua@...ontech.com
Subject: [PATCH 2/2] block/mq-deadline: Prioritize first request
The function deadline_head_request() selects the request at the head
of the mq-deadline scheduler's sector-sorted red-black tree.
Dispatching such a request returns the disk access position to the
lowest sector, preventing the head from swinging back and forth.
- Scheduler request batching may reduce or even eliminate the
  scheduler's ability to merge and sort requests, so I sometimes
  set nr_sched_batch to 1.
- This patch may increase the risk of requests expiring; I do not
  know whether a stricter expiry check is necessary.
- I tested some disks (mainly rotational disks and some SSDs) with
  the fio tool (using the sync, direct, etc. parameters). The
  results show an increase in the disks' small-block sequential
  read and write performance. Does this imply that changing
  nr_sched_batch is reasonable?
The test hardware is:
Kunpeng-920, HW-SAS3508+(MG04ACA400N * 2), RAID0.
The test command is:
fio -ioengine=psync -lockmem=1G -buffered=0 -time_based=1 -direct=1
-iodepth=1 -thread -bs=512B -size=110g -numjobs=16 -runtime=300
-group_reporting -name=read -filename=/dev/sdb14
-ioscheduler=mq-deadline -rw=read[,write,rw]
The following is the test data:
origin/master:
read iops: 152421 write iops: 136959 rw iops: 54593,54581
nr_sched_batch = 1:
read iops: 166449 write iops: 139477 rw iops: 55363,55355
nr_sched_batch = 1, use deadline_head_request:
read iops: 171177 write iops: 184431 rw iops: 56178,56169
Signed-off-by: Wang You <wangyoua@...ontech.com>
---
block/mq-deadline.c | 42 +++++++++++++++++++++++++++++++++++++++---
1 file changed, 39 insertions(+), 3 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 1a9e835e816c..e155f49d7a70 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -344,6 +344,35 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
return rq;
}
+static inline struct request *
+deadline_head_request(struct deadline_data *dd, struct dd_per_prio *per_prio, int data_dir)
+{
+ struct rb_node *node = rb_first(&per_prio->sort_list[data_dir]);
+ struct request *rq;
+ unsigned long flags;
+
+ if (!node)
+ return NULL;
+
+ rq = rb_entry_rq(node);
+ if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
+ return rq;
+
+ /*
+ * Look for a write request that can be dispatched, that is one with
+ * an unlocked target zone.
+ */
+ spin_lock_irqsave(&dd->zone_lock, flags);
+ while (rq) {
+ if (blk_req_can_dispatch_to_zone(rq))
+ break;
+ rq = deadline_latter_request(rq);
+ }
+ spin_unlock_irqrestore(&dd->zone_lock, flags);
+
+ return rq;
+}
+
/*
* Returns true if and only if @rq started after @latest_start where
* @latest_start is in jiffies.
@@ -429,13 +458,20 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
* we are not running a batch, find best request for selected data_dir
*/
next_rq = deadline_next_request(dd, per_prio, data_dir);
- if (deadline_check_fifo(per_prio, data_dir) || !next_rq) {
+ if (deadline_check_fifo(per_prio, data_dir)) {
/*
* A deadline has expired, the last request was in the other
- * direction, or we have run out of higher-sectored requests.
- * Start again from the request with the earliest expiry time.
+ * direction. Start again from the request with the earliest
+ * expiry time.
*/
rq = deadline_fifo_request(dd, per_prio, data_dir);
+ } else if (!next_rq) {
+ /*
+ * No request has expired, and we have run out of
+ * higher-sectored requests. Restart from the lowest-sectored
+ * request, which may reduce disk seeking.
+ */
+ rq = deadline_head_request(dd, per_prio, data_dir);
} else {
/*
* The last req was the same dir and we have a next request in
--
2.27.0