Message-ID: <20200817121052.GC79836@VM20190228-100.tbsite.net>
Date:   Mon, 17 Aug 2020 20:10:52 +0800
From:   Baolin Wang <baolin.wang@...ux.alibaba.com>
To:     Christoph Hellwig <hch@....de>
Cc:     axboe@...nel.dk, ming.lei@...hat.com, baolin.wang7@...il.com,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND 4/5] block: Remove blk_mq_attempt_merge() function

On Mon, Aug 17, 2020 at 08:31:53AM +0200, Christoph Hellwig wrote:
> On Mon, Aug 17, 2020 at 12:09:18PM +0800, Baolin Wang wrote:
> >  		unsigned int nr_segs)
> >  {
> > @@ -447,7 +425,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
> >  			!list_empty_careful(&ctx->rq_lists[type])) {
> >  		/* default per sw-queue merge */
> >  		spin_lock(&ctx->lock);
> > -		ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
> > +		/*
> > +		 * Reverse check our software queue for entries that we could
> > +		 * potentially merge with. Currently includes a hand-wavy stop
> > +		 * count of 8, to not spend too much time checking for merges.
> > +		 */
> > +		if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
> > +			ctx->rq_merged++;
> > +			ret = true;
> > +		}
> > +
> >  		spin_unlock(&ctx->lock);
> 
> This adds an overly long line.  That being said the whole thing could
> be nicely simplified to:
> 
> 	...
> 
> 	if (e && e->type->ops.bio_merge)
> 		return e->type->ops.bio_merge(hctx, bio, nr_segs);
> 
> 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
> 	    list_empty_careful(&ctx->rq_lists[hctx->type]))
> 		return false;
> 
> 	/*
> 	 * Reverse check our software queue for entries that we could
> 	 * potentially merge with. Currently includes a hand-wavy stop count of
> 	 * 8, to not spend too much time checking for merges.
> 	 */
> 	spin_lock(&ctx->lock);
> 	ret = blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs);
> 	if (ret)
> 		ctx->rq_merged++;
> 	spin_unlock(&ctx->lock);
> 
> Also I think it would make sense to move the locking into
> blk_mq_bio_list_merge.

Sure, will do in next version.
