Message-ID: <9002cafb-4517-43be-9949-e09101a453ba@kernel.org>
Date: Mon, 16 Jun 2025 12:07:47 +0900
From: Damien Le Moal <dlemoal@...nel.org>
To: Yu Kuai <yukuai1@...weicloud.com>, ming.lei@...hat.com,
 yukuai3@...wei.com, tj@...nel.org, josef@...icpanda.com, axboe@...nel.dk
Cc: linux-block@...r.kernel.org, cgroups@...r.kernel.org,
 linux-kernel@...r.kernel.org, yi.zhang@...wei.com, yangerkun@...wei.com,
 johnny.chenyi@...wei.com
Subject: Re: [PATCH RFC v2 5/5] blk-mq-sched: support request batch
 dispatching for sq elevator

On 6/14/25 18:25, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@...wei.com>
> 
> Before this patch, each context holds a global lock to dispatch one
> request at a time, which introduces intense lock contention:

How so? If there is only a single context issuing I/Os, there will not be any
contention on the lock.

> lock
> ops.dispatch_request
> unlock
> 
> Hence support dispatching a batch of requests while holding the lock, to
> reduce lock contention.

Lock contention would happen only if you have multiple processes issuing I/Os.
For the single-context case, this simply reduces the overhead of dispatching
commands by avoiding a lock+unlock per request. So please explain that clearly.
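
To illustrate what I mean, roughly (a sketch only, reusing the names from your
patch plus placeholder helpers have_budget()/dispatch()/dispatch_list(), not
actual kernel code):

	/* current model: take and release the lock for every single request */
	while (have_budget()) {
		elevator_lock(e);
		rq = e->type->ops.dispatch_request(hctx);
		elevator_unlock(e);
		if (!rq)
			break;
		dispatch(rq);
	}

	/* batched model: one lock/unlock pair covers up to count requests */
	elevator_lock(e);
	for (i = 0; i < count; i++) {
		rq = e->type->ops.dispatch_request(hctx);
		if (!rq)
			break;
		list_add_tail(&rq->queuelist, &rq_list);
	}
	elevator_unlock(e);
	dispatch_list(&rq_list);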

> 
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
>  block/blk-mq-sched.c | 55 ++++++++++++++++++++++++++++++++++++++++----
>  block/blk-mq.h       | 21 +++++++++++++++++
>  2 files changed, 72 insertions(+), 4 deletions(-)
> 
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 990d0f19594a..d7cb88c8e8c7 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -101,6 +101,49 @@ static bool elevator_can_dispatch(struct sched_dispatch_ctx *ctx)
>  	return true;
>  }
>  
> +static void elevator_dispatch_requests(struct sched_dispatch_ctx *ctx)
> +{
> +	struct request *rq;
> +	int budget_token[BUDGET_TOKEN_BATCH];
> +	int count;
> +	int i;

These two can be declared on the same line.
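
I.e.:

	int count, i;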

> +
> +	while (true) {
> +		if (!elevator_can_dispatch(ctx))
> +			return;
> +
> +		count = blk_mq_get_dispatch_budgets(ctx->q, budget_token);
> +		if (count <= 0)
> +			return;
> +
> +		elevator_lock(ctx->e);
> +		for (i = 0; i < count; ++i) {
> +			rq = ctx->e->type->ops.dispatch_request(ctx->hctx);
> +			if (!rq) {
> +				ctx->run_queue = true;
> +				goto err_free_budgets;
> +			}
> +
> +			blk_mq_set_rq_budget_token(rq, budget_token[i]);
> +			list_add_tail(&rq->queuelist, &ctx->rq_list);
> +			ctx->count++;
> +			if (rq->mq_hctx != ctx->hctx)
> +				ctx->multi_hctxs = true;
> +
> +			if (!blk_mq_get_driver_tag(rq)) {
> +				i++;
> +				goto err_free_budgets;
> +			}
> +		}
> +		elevator_unlock(ctx->e);
> +	}
> +
> +err_free_budgets:
> +	elevator_unlock(ctx->e);
> +	for (; i < count; ++i)
> +		blk_mq_put_dispatch_budget(ctx->q, budget_token[i]);
> +}


-- 
Damien Le Moal
Western Digital Research
