Message-ID: <20250725180334.40187-1-yukuai@kernel.org>
Date: Sat, 26 Jul 2025 02:03:34 +0800
From: Yu Kuai <yukuai@...nel.org>
To: jack@...e.cz,
dlemoal@...nel.org,
axboe@...nel.dk,
linux-block@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
yukuai3@...wei.com,
yi.zhang@...wei.com,
yangerkun@...wei.com,
johnny.chenyi@...wei.com
Subject: [PATCH v2] blk-ioc: don't hold queue_lock for ioc_lookup_icq()
From: Yu Kuai <yukuai3@...wei.com>
Currently, issuing io can take the queue_lock three times, from
bfq_bio_merge(), bfq_limit_depth() and bfq_prepare_request(). The
queue_lock is not necessary if the icq is already created, because
neither the queue nor the ioc can be freed before io issuing is done.
Hence remove the unnecessary queue_lock and use RCU to protect the
radix tree lookup.

Note that this is also a preparatory patch for supporting request
batch dispatching [1].
[1] https://lore.kernel.org/all/20250722072431.610354-1-yukuai1@huaweicloud.com/
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
changes from v1:
- modify ioc_lookup_icq() directly to get rid of queue_lock
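
For reference, here is a minimal sketch of the lockless lookup this change
relies on, adapted from the upstream ioc_lookup_icq() (hint pointer plus
radix tree, both RCU protected). It is illustrative only and not part of
the diff below:

struct io_cq *ioc_lookup_icq(struct request_queue *q)
{
	struct io_context *ioc = current->io_context;
	struct io_cq *icq;

	/*
	 * icq's are indexed from @ioc using a radix tree and a hint
	 * pointer, both protected with RCU.  Removals take both the
	 * queue and ioc locks, and the issue path pins both the queue
	 * and the ioc, so an RCU read-side section is enough here.
	 */
	rcu_read_lock();
	icq = rcu_dereference(ioc->icq_hint);
	if (icq && icq->q == q)
		goto out;

	icq = radix_tree_lookup(&ioc->icq_tree, q->id);
	if (icq && icq->q == q)
		rcu_assign_pointer(ioc->icq_hint, icq);	/* allowed to race */
	else
		icq = NULL;
out:
	rcu_read_unlock();
	return icq;
}
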
 block/bfq-iosched.c | 18 ++----------------
 block/blk-ioc.c     | 10 +++-------
 2 files changed, 5 insertions(+), 23 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 0cb1e9873aab..f71ec0887733 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -454,17 +454,10 @@ static struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
  */
 static struct bfq_io_cq *bfq_bic_lookup(struct request_queue *q)
 {
-	struct bfq_io_cq *icq;
-	unsigned long flags;
-
 	if (!current->io_context)
 		return NULL;
 
-	spin_lock_irqsave(&q->queue_lock, flags);
-	icq = icq_to_bic(ioc_lookup_icq(q));
-	spin_unlock_irqrestore(&q->queue_lock, flags);
-
-	return icq;
+	return icq_to_bic(ioc_lookup_icq(q));
 }
 
 /*
@@ -2457,15 +2450,8 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs)
 {
 	struct bfq_data *bfqd = q->elevator->elevator_data;
-	struct request *free = NULL;
-	/*
-	 * bfq_bic_lookup grabs the queue_lock: invoke it now and
-	 * store its return value for later use, to avoid nesting
-	 * queue_lock inside the bfqd->lock. We assume that the bic
-	 * returned by bfq_bic_lookup does not go away before
-	 * bfqd->lock is taken.
-	 */
 	struct bfq_io_cq *bic = bfq_bic_lookup(q);
+	struct request *free = NULL;
 	bool ret;
 
 	spin_lock_irq(&bfqd->lock);
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index ce82770c72ab..ea9c975aaef7 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -308,19 +308,18 @@ int __copy_io(unsigned long clone_flags, struct task_struct *tsk)
 
 #ifdef CONFIG_BLK_ICQ
 /**
- * ioc_lookup_icq - lookup io_cq from ioc
+ * ioc_lookup_icq - lookup io_cq from ioc in the io issue path
  * @q: the associated request_queue
  *
  * Look up io_cq associated with @ioc - @q pair from @ioc.  Must be called
- * with @q->queue_lock held.
+ * from the io issue path; returns NULL if current is issuing io to @q for
+ * the first time, and a valid icq otherwise.
  */
 struct io_cq *ioc_lookup_icq(struct request_queue *q)
 {
 	struct io_context *ioc = current->io_context;
 	struct io_cq *icq;
 
-	lockdep_assert_held(&q->queue_lock);
-
 	/*
 	 * icq's are indexed from @ioc using radix tree and hint pointer,
 	 * both of which are protected with RCU.  All removals are done
@@ -419,10 +418,7 @@ struct io_cq *ioc_find_get_icq(struct request_queue *q)
 		task_unlock(current);
 	} else {
 		get_io_context(ioc);
-
-		spin_lock_irq(&q->queue_lock);
 		icq = ioc_lookup_icq(q);
-		spin_unlock_irq(&q->queue_lock);
 	}
 
 	if (!icq) {
--
2.43.0