Message-ID: <20200817063153.GD12248@lst.de>
Date: Mon, 17 Aug 2020 08:31:53 +0200
From: Christoph Hellwig <hch@....de>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: axboe@...nel.dk, ming.lei@...hat.com, hch@....de,
baolin.wang7@...il.com, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND 4/5] block: Remove blk_mq_attempt_merge() function
On Mon, Aug 17, 2020 at 12:09:18PM +0800, Baolin Wang wrote:
> unsigned int nr_segs)
> {
> @@ -447,7 +425,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
> !list_empty_careful(&ctx->rq_lists[type])) {
> /* default per sw-queue merge */
> spin_lock(&ctx->lock);
> - ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
> + /*
> + * Reverse check our software queue for entries that we could
> + * potentially merge with. Currently includes a hand-wavy stop
> + * count of 8, to not spend too much time checking for merges.
> + */
> + if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
> + ctx->rq_merged++;
> + ret = true;
> + }
> +
> spin_unlock(&ctx->lock);
This adds an overly long line. That being said, the whole thing could
be nicely simplified to:
	...
	if (e && e->type->ops.bio_merge)
		return e->type->ops.bio_merge(hctx, bio, nr_segs);

	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
	    list_empty_careful(&ctx->rq_lists[hctx->type]))
		return false;

	/*
	 * Reverse check our software queue for entries that we could
	 * potentially merge with.  Currently includes a hand-wavy stop
	 * count of 8, to not spend too much time checking for merges.
	 */
	spin_lock(&ctx->lock);
	ret = blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs);
	if (ret)
		ctx->rq_merged++;
	spin_unlock(&ctx->lock);
Also I think it would make sense to move the locking into
blk_mq_bio_list_merge.