Message-Id: <20250730082207.4031744-2-yukuai1@huaweicloud.com>
Date: Wed, 30 Jul 2025 16:22:03 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: dlemoal@...nel.org,
hare@...e.de,
jack@...e.cz,
tj@...nel.org,
josef@...icpanda.com,
axboe@...nel.dk,
yukuai3@...wei.com
Cc: cgroups@...r.kernel.org,
linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
yukuai1@...weicloud.com,
yi.zhang@...wei.com,
yangerkun@...wei.com,
johnny.chenyi@...wei.com
Subject: [PATCH v2 1/5] blk-mq-sched: introduce high level elevator lock
From: Yu Kuai <yukuai3@...wei.com>
Currently, both mq-deadline and bfq have a global spin lock that is
grabbed inside elevator methods like dispatch_request, insert_requests
and bio_merge. This global lock is the main reason mq-deadline and bfq
don't scale well.
While dispatching a request, blk_mq_get_dispatch_budget() and
blk_mq_get_driver_tag() must be called, and they are not ready to be
called inside elevator methods, hence introducing a new method like
dispatch_requests() is not possible.
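For context, a rough sketch of the current per-request flow in
__blk_mq_do_dispatch_sched() (simplified, not the literal upstream code):

	budget_token = blk_mq_get_dispatch_budget(q);	/* per request */
	if (budget_token < 0)
		break;

	rq = e->type->ops.dispatch_request(hctx);	/* elevator callback */
	if (!rq) {
		blk_mq_put_dispatch_budget(q, budget_token);
		break;
	}
	/* the driver tag is acquired later, outside the elevator */

Budget (and later driver tag) handling stays in blk-mq core, which is why
the whole dispatch loop cannot simply be pushed down into the elevator.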
Hence introduce a new high-level elevator lock; for now it only protects
dispatch_request. Following patches will convert mq-deadline and bfq to
use this lock, and finally support batched request dispatch by calling
the method multiple times while holding the lock.
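As a rough sketch of where this is heading (the conversion happens in
later patches of this series, not in this patch; rq_list, count and
max_dispatch are illustrative names, and budget/tag handling is omitted
for brevity):

	if (blk_queue_sq_sched(q))
		spin_lock(&e->lock);

	while (count < max_dispatch) {
		rq = e->type->ops.dispatch_request(hctx);
		if (!rq)
			break;
		list_add_tail(&rq->queuelist, &rq_list);
		count++;
	}

	if (blk_queue_sq_sched(q))
		spin_unlock(&e->lock);

so the lock is taken once per batch instead of once per dispatched request.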
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
 block/blk-mq-sched.c | 6 ++++++
 block/elevator.c     | 1 +
 block/elevator.h     | 4 ++--
 3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 55a0fd105147..7911fae75ce4 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -98,6 +98,7 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 	max_dispatch = hctx->queue->nr_requests;
 
 	do {
+		bool sq_sched = blk_queue_sq_sched(q);
 		struct request *rq;
 		int budget_token;
 
@@ -113,7 +114,12 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
 		if (budget_token < 0)
 			break;
 
+		if (sq_sched)
+			spin_lock(&e->lock);
 		rq = e->type->ops.dispatch_request(hctx);
+		if (sq_sched)
+			spin_unlock(&e->lock);
+
 		if (!rq) {
 			blk_mq_put_dispatch_budget(q, budget_token);
 			/*
diff --git a/block/elevator.c b/block/elevator.c
index 88f8f36bed98..45303af0ca73 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -144,6 +144,7 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
 	eq->type = e;
 	kobject_init(&eq->kobj, &elv_ktype);
 	mutex_init(&eq->sysfs_lock);
+	spin_lock_init(&eq->lock);
 	hash_init(eq->hash);
 
 	return eq;
diff --git a/block/elevator.h b/block/elevator.h
index a07ce773a38f..cbbac4f7825c 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -110,12 +110,12 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
 /*
  * each queue has an elevator_queue associated with it
  */
-struct elevator_queue
-{
+struct elevator_queue {
 	struct elevator_type *type;
 	void *elevator_data;
 	struct kobject kobj;
 	struct mutex sysfs_lock;
+	spinlock_t lock;
 	unsigned long flags;
 	DECLARE_HASHTABLE(hash, ELV_HASH_BITS);
 };
--
2.39.2