Message-ID: <x494mnxh7x4.fsf@segfault.boston.devel.redhat.com>
Date:	Thu, 30 Apr 2015 12:56:07 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Shaohua Li <shli@...com>
Cc:	Jens Axboe <axboe@...com>, <linux-kernel@...r.kernel.org>,
	Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] blk-mq: rationalize plug

Shaohua Li <shli@...com> writes:

>> I like the general approach.  I do wonder whether we should hold back a
>> single I/O at all times, or whether we should do something similar to
>> what Mike Snitzer did in the dm-mpath driver, where he keeps track of
>> the end position of the last I/O (without holding the I/O back) to
>> determine whether we're doing sequential I/O.  With your approach, we
>> will be able to get merges from the very start of a sequential I/O
>> stream.  With Mike's approach, you don't get merges until the second and
>> third I/Os.  Mike's way, you get potentially better latency for random I/O
>> at the cost of some extra fields in a data structure.  Something to
>> think about.
>
> Sorry for the late reply, I've been busy these days.
>
> Not sure about this. If latency is a concern, the caller really should
> flush the plug immediately.

Yeah, I don't have any benchmark numbers to tip the scales one way or
the other.
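
For anyone following along, a rough user-space sketch of the "track the
last I/O end position" idea (field and function names are made up for
illustration; this is not the dm-mpath code) might look like:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-queue bookkeeping: remember where the last I/O ended. */
struct seq_tracker {
	uint64_t last_end_sector;	/* end of the previous I/O */
	bool     valid;			/* have we seen any I/O yet? */
};

/* Classify a new I/O as sequential if it starts where the last one ended. */
static bool io_is_sequential(struct seq_tracker *t,
			     uint64_t start_sector, uint32_t nr_sectors)
{
	bool seq = t->valid && start_sector == t->last_end_sector;

	/* Update the tracker without holding the I/O back. */
	t->last_end_sector = start_sector + nr_sectors;
	t->valid = true;
	return seq;
}

int main(void)
{
	struct seq_tracker t = { 0 };

	/* 0..7 and 8..15 are sequential; 100..107 is not. */
	printf("%d %d %d\n",
	       io_is_sequential(&t, 0, 8),
	       io_is_sequential(&t, 8, 8),
	       io_is_sequential(&t, 100, 8));
	return 0;
}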

>> > @@ -1240,6 +1283,10 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
>> >  		return;
>> >  	}
>> >  
>> > +	if (use_plug && !blk_queue_nomerges(q) &&
>> > +	    blk_attempt_plug_merge(q, bio, &request_count))
>> > +		return;
>> > +
>> 
>> You've just added merging for flush/fua requests.  Was that intentional?
>> We don't attempt merges for them in the sq case, and we should probably be
>> consistent.
>  
> FUA/FLUSH is in the no-merge bio flag list, so the merge will not happen.
> But I agree that checking for it up front is faster.

Actually, after looking at the patched code again, you didn't change
that behaviour at all, so you can ignore this comment.
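
Just to spell out what "in the no-merge bio flag list" means, here is a
toy, user-space model of the eligibility check (the flag names are
invented for illustration; the real test lives in the bio/request merge
helpers, before any plug-list walk):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy bio flags, loosely modeled on the kernel's request flags. */
#define TOY_REQ_FLUSH	(1u << 0)
#define TOY_REQ_FUA	(1u << 1)
#define TOY_REQ_NOMERGE	(1u << 2)

/* Flags that disqualify a bio from merging. */
#define TOY_NOMERGE_FLAGS (TOY_REQ_FLUSH | TOY_REQ_FUA | TOY_REQ_NOMERGE)

struct toy_bio {
	uint64_t start_sector;
	uint32_t nr_sectors;
	uint32_t flags;
};

/* A merge is only attempted if none of the no-merge flags are set. */
static bool bio_mergeable(const struct toy_bio *bio)
{
	return !(bio->flags & TOY_NOMERGE_FLAGS);
}

int main(void)
{
	struct toy_bio plain = { .start_sector = 0, .nr_sectors = 8, .flags = 0 };
	struct toy_bio flush = { .start_sector = 8, .nr_sectors = 0,
				 .flags = TOY_REQ_FLUSH | TOY_REQ_FUA };

	printf("plain mergeable: %d\n", bio_mergeable(&plain));
	printf("flush mergeable: %d\n", bio_mergeable(&flush));
	return 0;
}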

>> > @@ -1314,7 +1358,7 @@ static void blk_sq_make_request(struct request_queue *q, struct bio *bio)
>> >  	 * If we have multiple hardware queues, just go directly to
>> >  	 * one of those for sync IO.
>> >  	 */
>> > -	use_plug = !is_flush_fua && !is_sync;
>> > +	use_plug = !is_flush_fua;
>> >  
>> >  	blk_queue_bounce(q, &bio);
>> 
>> For this part I'd rather use my patch.  Would you mind rebasing your
>> patch on top of the sq plugging patch I submitted?
>
> I'm OK with that; your patch looks better. I'll post some plug-related
> patches soon and include your patch in the series, if you don't mind. If
> you'd rather post it yourself, you can add my review to your patch:
>
> Reviewed-by: Shaohua Li <shli@...com>

Thanks, feel free to go ahead and post it with your series.

>> One final question, sort of related to this patch.  At the end of
>> blk_mq_make_request, we have this:
>> 
>> run_queue:
>> 		blk_mq_run_hw_queue(data.hctx, !is_sync || is_flush_fua);
>> 
>> So, we don't run the queue synchronously for flush/fua.  Is that logic
>> right?  Why don't we want to push that out immediately?
>
> I'm not sure. My theory is that delaying flush/fua increases the chance
> that the flush logic merges multiple flushes into one; please see the
> flush handling logic. But this is just my thought, and it might be
> completely wrong.

Sure.  I don't see a need to change the behaviour in this patch(set); I
was just curious to hear the reasoning for it.
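
For the record, that theory can be modeled very roughly like this
(made-up structures, not the actual blk-flush code): if flushes are
queued rather than dispatched immediately, one device flush can complete
several pending flush requests.

#include <stdio.h>

/*
 * Toy model: callers queue flush requests; the dispatcher issues one
 * device flush and completes everything that was pending at that point.
 */
struct flush_queue {
	int pending;		/* flush requests waiting */
	int device_flushes;	/* flush commands actually sent */
};

static void queue_flush(struct flush_queue *fq)
{
	fq->pending++;
}

static int dispatch_flushes(struct flush_queue *fq)
{
	int completed = fq->pending;

	if (completed) {
		fq->device_flushes++;	/* one command covers all of them */
		fq->pending = 0;
	}
	return completed;
}

int main(void)
{
	struct flush_queue fq = { 0 };

	/* Three callers ask for a flush before the queue is run... */
	queue_flush(&fq);
	queue_flush(&fq);
	queue_flush(&fq);

	/* ...and a single device flush completes all three. */
	printf("completed %d with %d device flush(es)\n",
	       dispatch_flushes(&fq), fq.device_flushes);
	return 0;
}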

Cheers,
Jeff