Message-ID: <20241126065228.GA1133@lst.de>
Date: Tue, 26 Nov 2024 07:52:28 +0100
From: Christoph Hellwig <hch@....de>
To: Chris Bainbridge <chris.bainbridge@...il.com>
Cc: hch@....de, LKML <linux-kernel@...r.kernel.org>, axboe@...nel.dk,
bvanassche@....org,
Linux regressions mailing list <regressions@...ts.linux.dev>,
linux-block@...r.kernel.org, semen.protsenko@...aro.org
Subject: Re: [REGRESSION] ioprio performance hangs, bisected

On Mon, Nov 25, 2024 at 05:16:39PM +0000, Chris Bainbridge wrote:
> I did a bit of debugging.

Thanks, this was extremely helpful!

mq-deadline looks at the I/O priority not only in the submission path
but also in the completion path, which is rather unexpected. Now for
drivers that consume bios as they make progress, req->bio will have
become NULL by the time the request completes, so the completion-side
priority lookup no longer sees the class the request was submitted with.
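
For reference, that lookup ultimately goes through req_get_ioprio(),
which now reads the priority from the bio, roughly like this
(paraphrased, exact code varies by tree):

/* include/linux/blk-mq.h, paraphrased */
static inline unsigned short req_get_ioprio(struct request *req)
{
	if (req->bio)
		return req->bio->bi_ioprio;
	return 0;	/* bio already consumed: IOPRIO_CLASS_NONE */
}

So once a driver has consumed the bios, dd_rq_ioclass() evaluates to
IOPRIO_CLASS_NONE at completion time, the completion is accounted to a
different dd_per_prio bucket than the insertion was, and the
per-priority inserted/completed statistics never balance.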

Fortunately, fixing this is easy and also improves the code in
mq-deadline. Can you test the patch below?

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index acdc28756d9d..91b3789f710e 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -685,10 +685,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 
 	prio = ioprio_class_to_prio[ioprio_class];
 	per_prio = &dd->per_prio[prio];
-	if (!rq->elv.priv[0]) {
+	if (!rq->elv.priv[0])
 		per_prio->stats.inserted++;
-		rq->elv.priv[0] = (void *)(uintptr_t)1;
-	}
+	rq->elv.priv[0] = per_prio;
 
 	if (blk_mq_sched_try_insert_merge(q, rq, free))
 		return;
@@ -753,18 +752,14 @@ static void dd_prepare_request(struct request *rq)
  */
 static void dd_finish_request(struct request *rq)
 {
-	struct request_queue *q = rq->q;
-	struct deadline_data *dd = q->elevator->elevator_data;
-	const u8 ioprio_class = dd_rq_ioclass(rq);
-	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
-	struct dd_per_prio *per_prio = &dd->per_prio[prio];
+	struct dd_per_prio *per_prio = rq->elv.priv[0];
 
 	/*
 	 * The block layer core may call dd_finish_request() without having
 	 * called dd_insert_requests(). Skip requests that bypassed I/O
 	 * scheduling. See also blk_mq_request_bypass_insert().
 	 */
-	if (rq->elv.priv[0])
+	if (per_prio)
 		atomic_inc(&per_prio->stats.completed);
 }