Message-Id: <20250814033522.770575-9-yukuai1@huaweicloud.com>
Date: Thu, 14 Aug 2025 11:35:14 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: axboe@...nel.dk,
yukuai3@...wei.com,
bvanassche@....org,
nilay@...ux.ibm.com,
hare@...e.de,
ming.lei@...hat.com
Cc: linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
yukuai1@...weicloud.com,
yi.zhang@...wei.com,
yangerkun@...wei.com,
johnny.chenyi@...wei.com
Subject: [PATCH 08/16] blk-mq: fix blk_mq_tags double free while nr_requests is grown
From: Yu Kuai <yukuai3@...wei.com>
When the user triggers tags growth through the queue sysfs attribute
nr_requests, hctx->sched_tags will be freed directly and replaced with
newly allocated tags, see blk_mq_tag_update_depth().
The problem is that hctx->sched_tags comes from elevator->et->tags, and
et->tags still points to the freed tags, hence a later elevator exit
will try to free the tags again, causing a kernel panic.
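The failing sequence can be modelled with a minimal user-space sketch
(plain C, hypothetical simplified types, not the kernel structures):

  /*
   * The per-hctx sched_tags pointer and the copy kept in
   * elevator->et->tags[i] alias the same allocation.  If the grow path
   * frees and replaces only the hctx copy, the et copy keeps pointing
   * at freed memory and is freed again on elevator exit.
   */
  #include <stdio.h>
  #include <stdlib.h>

  struct tags { unsigned int depth; };

  int main(void)
  {
          struct tags *et_copy = malloc(sizeof(*et_copy));  /* et->tags[i]      */
          struct tags *hctx_copy = et_copy;                 /* hctx->sched_tags */

          /* old grow path: free and reallocate only the hctx copy */
          free(hctx_copy);
          hctx_copy = malloc(sizeof(*hctx_copy));

          printf("et copy %p is now dangling\n", (void *)et_copy);
          /* elevator exit would now call free(et_copy): double free */

          free(hctx_copy);
          return 0;
  }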
Fix this problem by using the new helper blk_mq_alloc_sched_tags() to
allocate new sched_tags. Meanwhile, a long-standing problem can be
fixed as well:
If blk_mq_tag_update_depth() succeeds for a previous hctx, its bitmap
depth is updated; however, if a following hctx fails, q->nr_requests is
not updated and the previous hctx->sched_tags ends up bigger than
q->nr_requests.
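The helper added below takes an all-or-nothing approach; as a rough
model (hypothetical user-space sketch, not the kernel API), the idea is
to build the whole replacement set before anything is freed:

  #include <stdlib.h>

  #define NR_HW 4

  struct tags { unsigned int depth; };

  /* Allocate a complete replacement set first; only after that succeeds,
   * free the old set once and repoint every copy.  On failure nothing
   * has been modified, so the recorded depth stays consistent. */
  static int grow_tags(struct tags *hctx_tags[NR_HW],
                       struct tags *et_tags[NR_HW], unsigned int nr)
  {
          struct tags *new_tags[NR_HW];
          unsigned int i;

          for (i = 0; i < NR_HW; i++) {
                  new_tags[i] = malloc(sizeof(*new_tags[i]));
                  if (!new_tags[i]) {
                          while (i--)
                                  free(new_tags[i]);
                          return -1;      /* nothing was touched yet */
                  }
                  new_tags[i]->depth = nr;
          }

          for (i = 0; i < NR_HW; i++) {
                  free(et_tags[i]);       /* old tags freed exactly once */
                  et_tags[i] = new_tags[i];
                  hctx_tags[i] = new_tags[i];
          }
          return 0;
  }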
Fixes: f5a6604f7a44 ("block: fix lockdep warning caused by lock dependency in elv_iosched_store")
Fixes: e3a2b3f931f5 ("blk-mq: allow changing of queue depth through sysfs")
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
block/blk-mq.c | 31 ++++++++++++++++++++-----------
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a7d6a20c1524..f1c11f591c27 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4917,6 +4917,23 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
}
EXPORT_SYMBOL(blk_mq_free_tag_set);
+static int blk_mq_sched_grow_tags(struct request_queue *q, unsigned int nr)
+{
+ struct elevator_tags *et =
+ blk_mq_alloc_sched_tags(q->tag_set, q->nr_hw_queues, nr);
+ struct blk_mq_hw_ctx *hctx;
+ unsigned long i;
+
+ if (!et)
+ return -ENOMEM;
+
+ blk_mq_free_sched_tags(q->elevator->et, q->tag_set);
+ queue_for_each_hw_ctx(q, hctx, i)
+ hctx->sched_tags = et->tags[i];
+ q->elevator->et = et;
+ return 0;
+}
+
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
{
struct blk_mq_tag_set *set = q->tag_set;
@@ -4940,17 +4957,9 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
sbitmap_queue_resize(&hctx->sched_tags->bitmap_tags,
nr - hctx->sched_tags->nr_reserved_tags);
} else {
- queue_for_each_hw_ctx(q, hctx, i) {
- /*
- * If we're using an MQ scheduler, just update the
- * scheduler queue depth. This is similar to what the
- * old code would do.
- */
- ret = blk_mq_tag_update_depth(hctx,
- &hctx->sched_tags, nr);
- if (ret)
- goto out;
- }
+ ret = blk_mq_sched_grow_tags(q, nr);
+ if (ret)
+ goto out;
}
q->nr_requests = nr;
--
2.39.2