Message-ID: <20210806020826.1407257-3-yukuai3@huawei.com>
Date: Fri, 6 Aug 2021 10:08:24 +0800
From: Yu Kuai <yukuai3@...wei.com>
To: <paolo.valente@...aro.org>, <axboe@...nel.dk>
CC: <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<yukuai3@...wei.com>, <yi.zhang@...wei.com>
Subject: [PATCH v2 2/4] block, bfq: do not idle if only one cgroup is activated

If only one group is activated, there is no need to guarantee the
same share of the throughput for queues in the same group.

If CONFIG_BFQ_GROUP_IOSCHED is enabled, there is no need to check
'varied_queue_weights' and 'multiple_classes_busy' (see the sketch
after this list):
1) num_groups_with_pending_reqs = 0: idling is not needed
2) num_groups_with_pending_reqs = 1:
 - if the root group has any pending requests, idling is needed
 - if the root group is idle, idling is not needed
3) num_groups_with_pending_reqs > 1: idling is needed
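
The logic above can be read as the following standalone sketch. The
bfq_data fields are the ones used by this patch; the helper name is
made up for illustration, and the real change lives inside
bfq_asymmetric_scenario() in the hunk below:

	/* Sketch of cases 1)-3): group-based asymmetry check. */
	static bool bfq_group_scenario_asymmetric(struct bfq_data *bfqd)
	{
		/* case 3): more than one group has pending requests */
		if (bfqd->num_groups_with_pending_reqs > 1)
			return true;

		/* case 2): one group, and the root group is busy too */
		if (bfqd->num_groups_with_pending_reqs &&
		    bfqd->num_queues_with_pending_reqs_in_root)
			return true;

		/* cases 1) and 2) with an idle root group */
		return false;
	}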

Test procedure:
run "fio -numjobs=1 -ioengine=psync -bs=4k -direct=1 -rw=randread..."
multiple times in the same cgroup (not the root group).

Test result: total bandwidth (MiB/s)

| total jobs | before this patch | after this patch     |
| ---------- | ----------------- | -------------------- |
| 1          | 33.8              | 33.8                 |
| 2          | 33.8              | 65.4 (32.7 per job)  |
| 4          | 33.8              | 106.8 (26.7 per job) |
| 8          | 33.8              | 126.4 (15.8 per job) |

By the way, if I test with "fio -numjobs=1/2/4/8 ...", the results are
the same with or without this patch.

Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
block/bfq-iosched.c | 35 ++++++++++++++++++++++++++---------
1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 7c6b412f9a9c..a780205a1be4 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -709,7 +709,9 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
* much easier to maintain the needed state:
* 1) all active queues have the same weight,
* 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group (including root_group).
+ * If the last condition is false, there is no need to guarantee
+ * the same share of the throughput for queues in the same group.
* In particular, the last condition is always true if hierarchical
* support or the cgroups interface are not enabled, thus no state
* needs to be maintained in this case.
@@ -717,7 +719,26 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
struct bfq_queue *bfqq)
{
- bool smallest_weight = bfqq &&
+ bool smallest_weight;
+ bool varied_queue_weights;
+ bool multiple_classes_busy;
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ if (bfqd->num_groups_with_pending_reqs > 1)
+ return true;
+
+ if (bfqd->num_groups_with_pending_reqs &&
+ bfqd->num_queues_with_pending_reqs_in_root)
+ return true;
+
+ /*
+ * Reaching here means that at most one group (including the root
+ * group) has pending requests, thus it's safe to return false.
+ */
+ return false;
+#endif
+
+ smallest_weight = bfqq &&
bfqq->weight_counter &&
bfqq->weight_counter ==
container_of(
@@ -729,21 +750,17 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
* For queue weights to differ, queue_weights_tree must contain
* at least two nodes.
*/
- bool varied_queue_weights = !smallest_weight &&
+ varied_queue_weights = !smallest_weight &&
!RB_EMPTY_ROOT(&bfqd->queue_weights_tree.rb_root) &&
(bfqd->queue_weights_tree.rb_root.rb_node->rb_left ||
bfqd->queue_weights_tree.rb_root.rb_node->rb_right);

- bool multiple_classes_busy =
+ multiple_classes_busy =
(bfqd->busy_queues[0] && bfqd->busy_queues[1]) ||
(bfqd->busy_queues[0] && bfqd->busy_queues[2]) ||
(bfqd->busy_queues[1] && bfqd->busy_queues[2]);

- return varied_queue_weights || multiple_classes_busy
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
- || bfqd->num_groups_with_pending_reqs > 0
-#endif
- ;
+ return varied_queue_weights || multiple_classes_busy;
}

/*
--
2.31.1