Date:	Wed, 21 Jan 2009 01:17:45 -0500
From:	Ben Gamari <bgamari@...il.com>
To:	Mathieu Desnoyers <compudj@...stal.dyndns.org>
Cc:	Jens Axboe <jens.axboe@...cle.com>, akpm@...ux-foundation.org,
	ltt-dev@...ts.casi.polymtl.ca,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [ltt-dev] [RFC PATCH] block: Fix bio merge induced high I/O
 latency

On Tue, 2009-01-20 at 23:54 -0500, Mathieu Desnoyers wrote:
> * Ben Gamari (bgamari@...il.com) wrote:
> > On Tue, Jan 20, 2009 at 7:25 PM, Mathieu Desnoyers
> > <mathieu.desnoyers@...ymtl.ca> wrote:
> > > * Mathieu Desnoyers (mathieu.desnoyers@...ymtl.ca) wrote:
> > >
> > > As a side-note : I'd like to have my results confirmed by others.
> > 
> > Well, I think the (fixed) patch did help to some degree (I haven't
> > done fio benchmarks to compare against yet). Unfortunately, the I/O
> > wait time problem still remains. I have been waiting 3 minutes now for
> > evolution to start, with 88% I/O wait time yet no visible signs of
> > progress. I've confirmed I'm using the CFQ scheduler, so that's not
> > the problem.
> > 
> 
> Did you also 
> 
> echo 1 > /sys/block/sd{a,b}/device/queue_depth
I have been using this in some of my measurements (this is recorded, of
course).

> echo 1 > /sys/block/sd{a,b}/queue/iosched/slice_async_rq
> echo 1 > /sys/block/sd{a,b}/queue/iosched/quantum
I haven't been doing this, although I will collect a data set with these
parameters set so that I can compare their effect against the default
configuration.
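
For my own notes, the sequence I plan to run as root, per drive, is
roughly the following (recording the current values first so I can
restore the defaults afterwards; the drive name is just an example):

  # note the current values so the defaults can be restored later
  cat /sys/block/sda/device/queue_depth
  cat /sys/block/sda/queue/iosched/slice_async_rq
  cat /sys/block/sda/queue/iosched/quantum

  # apply the suggested settings (and likewise for sdb)
  echo 1 > /sys/block/sda/device/queue_depth
  echo 1 > /sys/block/sda/queue/iosched/slice_async_rq
  echo 1 > /sys/block/sda/queue/iosched/quantum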

> 
> (replacing sd{a,b} with your actual drives) ?
> 
> It seems to have been part of the factors that helped (along with the
> patch).
> 
> And hopefully you don't have a recent Seagate hard drive like me ? :-)
Thankfully, no.

> 
> So your test case is:
> - start a large dd with 1M block size
> - time evolution
> 
I've been using evolution to get a rough idea of the performance of the
configurations but not as a benchmark per se. I have some pretty
good-sized maildirs, so launching evolution for the first time can be
quite a task, IO-wise. Also, switching between folders used to be quite
time consuming. It seems like the patch did help a bit on this front
though.
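
To be explicit about what I'm timing, I read that test case as roughly
the following (the output path and size are just my own choices):

  # background sequential writer with 1M blocks
  dd if=/dev/zero of=/tmp/dd-test bs=1M count=10000 &
  # see how long the interactive workload takes under that load
  time evolution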

For a quantitative benchmark I've been using the fio job that you posted
earlier. I've been collecting results and should have a pretty good data
set soon.
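
For anyone else who wants to try something similar without digging that
message up, the general shape of such a job file (just an illustration,
not the exact job you posted) is something like:

  [global]
  directory=/tmp/fio-test
  runtime=60
  time_based

  ; streaming 1M writer, much like the dd case
  [writer]
  rw=write
  bs=1M
  size=4G

  ; small random reads, to see how their latency suffers
  [reader]
  rw=randread
  bs=4k
  size=512M

and it is run with "fio <jobfile>".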

I'll send out a compilation of all the data I've collected as soon as
I've finished.

- Ben


