Message-ID: <aRWvoOmo3_JTelPq@fedora>
Date: Thu, 13 Nov 2025 18:14:56 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Xue He <xue01.he@...sung.com>
Cc: axboe@...nel.dk, yukuai@...as.com, akpm@...ux-foundation.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 RESEND] block: plug attempts to batch allocate tags
multiple times
On Thu, Nov 13, 2025 at 08:02:02AM +0000, Xue He wrote:
> This patch enables batch allocation of sufficient tags after batched IO
> submission with the plug mechanism, avoiding frequent fallback to
> single-request allocation when the initial allocation is insufficient.
>
> ------------------------------------------------------------
> Perf:
> base code: __blk_mq_alloc_requests() 1.31%
> patch: __blk_mq_alloc_requests() 0.7%
> ------------------------------------------------------------
Can you include the workload used to collect the perf numbers as well?
>
> ---
> changes since v1:
> - Modify multiple batch registrations into a single loop to achieve
> the batch quantity
>
> changes since v2:
> - Modify the call location of remainder handling
> - Refactoring sbitmap cleanup time
>
> changes since v3:
> - Add handle operation in loop
> - Add helper sbitmap_find_bits_in_word
>
> changes since v4:
> - Split blk-mq.c changes from sbitmap
>
> Signed-off-by: hexue <xue01.he@...sung.com>
> ---
> block/blk-mq.c | 39 ++++++++++++++++++++++-----------------
> 1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 09f579414161..64cd0a3c7cbf 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -467,26 +467,31 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
> unsigned long tag_mask;
> int i, nr = 0;
>
> - tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
> - if (unlikely(!tag_mask))
> - return NULL;
> + do {
> + tag_mask = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
> + if (unlikely(!tag_mask)) {
> + if (nr == 0)
> + return NULL;
> + break;
> + }
> + tags = blk_mq_tags_from_data(data);
> + for (i = 0; tag_mask; i++) {
> + if (!(tag_mask & (1UL << i)))
> + continue;
> + tag = tag_offset + i;
> + prefetch(tags->static_rqs[tag]);
> + tag_mask &= ~(1UL << i);
> + rq = blk_mq_rq_ctx_init(data, tags, tag);
> + rq_list_add_head(data->cached_rqs, rq);
> + data->nr_tags--;
> + nr++;
> + }
> + if (!(data->rq_flags & RQF_SCHED_TAGS))
> + blk_mq_add_active_requests(data->hctx, nr);
Here this is not only less efficient, it is also an over-counting bug:
`nr` accumulates across iterations of the outer do-loop, so each pass
re-adds the requests already counted in the previous passes (e.g. two
passes allocating 16 tags each would add 16 + 32 = 48 instead of 32).
Please move the above two lines after `percpu_ref_get_many`, so the
count is added once with the final total.
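Something along these lines (an untested sketch; the loop tail and the
percpu_ref_get_many() call are not visible in the quoted hunk, so I am
assuming the loop exits once data->nr_tags is consumed):

	} while (data->nr_tags);

	/*
	 * The caller already holds one q_usage_counter reference, so take
	 * nr - 1 more for the extra requests, then account all of the
	 * newly allocated requests once, with the final total.
	 */
	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
	if (!(data->rq_flags & RQF_SCHED_TAGS))
		blk_mq_add_active_requests(data->hctx, nr);

	return rq_list_pop(data->cached_rqs);

That way nr is only consumed once it is final.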
Thanks,
Ming