Message-ID: <20090410072600.GP5178@kernel.dk>
Date:	Fri, 10 Apr 2009 09:26:01 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Cc:	LKML <linux-kernel@...r.kernel.org>
Subject: Re: tiobench read 50% regression with 2.6.30-rc1

On Fri, Apr 10 2009, Zhang, Yanmin wrote:
> On Thu, 2009-04-09 at 11:57 +0200, Jens Axboe wrote:
> > On Thu, Apr 09 2009, Zhang, Yanmin wrote:
> > > Compared with 2.6.29's results, tiobench (read) shows about a 50%
> > > regression with 2.6.30-rc1 on all my machines. I bisected it down to
> > > the patch below.
> > > 
> > > b029195dda0129b427c6e579a3bb3ae752da3a93 is first bad commit
> > > commit b029195dda0129b427c6e579a3bb3ae752da3a93
> > > Author: Jens Axboe <jens.axboe@...cle.com>
> > > Date:   Tue Apr 7 11:38:31 2009 +0200
> > > 
> > >     cfq-iosched: don't let idling interfere with plugging
> > >     
> > >     When CFQ is waiting for a new request from a process, currently it'll
> > >     immediately restart queuing when it sees such a request. This doesn't
> > >     work very well with streamed IO, since we then end up splitting IO
> > >     that would otherwise have been merged nicely. For a simple dd test,
> > >     this causes 10x as many requests to be issued as we should have.
> > >     Normally this goes unnoticed due to the low overhead of requests
> > >     at the device side, but some hardware is very sensitive to request
> > >     sizes and there it can cause big slow downs.
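> > > One rough way to watch the splitting that commit message describes
> > > (the device and file names below are only placeholders):
> > > 
> > >   echo 3 > /proc/sys/vm/drop_caches
> > >   dd if=/mnt/test/bigfile of=/dev/null bs=1M &
> > >   iostat -x 1 sdX     # compare rrqm/s (read merges) and avgrq-sz
> > >                       # between kernels; fewer merges and smaller
> > >                       # requests match the 10x request count above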
> > > 
> > > 
> > > 
> > > Command to start the testing:
> > > # tiotest -k0 -k1 -k3 -f 80 -t 32
> > > 
> > > It's a multi-threaded program that starts 32 threads. Each thread does
> > > I/O on its own 80MB file.
> The files should be created before the test; please also drop the page
> caches with "echo 3 > /proc/sys/vm/drop_caches" before each run.
> 
> > 
> > It's not a huge surprise that we regressed there. I'll get this fixed up
> > next week. Can I talk you into trying to change the 'quantum' sysfs
> > variable for the drive? It's in /sys/block/xxx/queue/iosched, where xxx
> > is your drive(s). It's set to 4; if you could try progressively larger
> > settings and retest, that would help get things started.
> I tried 4, 8, 16, 64, and 128 and saw no difference in the results.
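> The sweep was essentially (sdX stands in for the real device):
> 
>   for q in 4 8 16 64 128; do
>       echo $q > /sys/block/sdX/queue/iosched/quantum
>       echo 3 > /proc/sys/vm/drop_caches
>       tiotest -k0 -k1 -k3 -f 80 -t 32
>   done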

OK, that's good. I'm away from the source atm, but I think I know what
is going on. We need to kick the queue for anything but the first queued
request. I'll get it fixed up next week.

-- 
Jens Axboe

