Message-ID: <20090409095706.GD5178@kernel.dk>
Date: Thu, 9 Apr 2009 11:57:06 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: tiobench read 50% regression with 2.6.30-rc1
On Thu, Apr 09 2009, Zhang, Yanmin wrote:
> Comparing with 2.6.29's result, tiobench (read) has about 50% regression
> with 2.6.30-rc1 on all my machines. Bisect down to below patch.
>
> b029195dda0129b427c6e579a3bb3ae752da3a93 is first bad commit
> commit b029195dda0129b427c6e579a3bb3ae752da3a93
> Author: Jens Axboe <jens.axboe@...cle.com>
> Date: Tue Apr 7 11:38:31 2009 +0200
>
> cfq-iosched: don't let idling interfere with plugging
>
> When CFQ is waiting for a new request from a process, currently it'll
> immediately restart queuing when it sees such a request. This doesn't
> work very well with streamed IO, since we then end up splitting IO
> that would otherwise have been merged nicely. For a simple dd test,
> this causes 10x as many requests to be issued as we should have.
> Normally this goes unnoticed due to the low overhead of requests
> at the device side, but some hardware is very sensitive to request
> sizes and there it can cause big slow downs.
>
>
>
> Command to start the testing:
> #tiotest -k0 -k1 -k3 -f 80 -t 32
>
> It's a multi-threaded program that starts 32 threads. Every thread does I/O
> on its own 80MB file.
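For what it's worth, the splitting the commit message describes is easy to
see while the test runs by watching the per-device merge counts and average
request size. A rough sketch using iostat from the sysstat package (the one
second interval is just an example):

  # in a second terminal while tiotest is running
  iostat -x 1

With the offending commit applied, the drive under test should show a
smaller avgrq-sz and more requests issued per second than on 2.6.29.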
It's not a huge surprise that we regressed there. I'll get this fixed up
next week. Can I talk you into trying to change the 'quantum' sysfs
variable for the drive? It's in /sys/block/xxx/queue/iosched, where xxx
is your drive(s). It's set to 4; if you could try progressively larger
settings and retest, that would help get things started.
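For reference, a rough sketch of what that sweep could look like (sda and
the particular values are just examples, adjust for your drive(s)):

  # sweep the CFQ quantum and rerun the same tiobench workload for each value
  for q in 4 8 16 32; do
      echo $q > /sys/block/sda/queue/iosched/quantum
      echo "quantum=$(cat /sys/block/sda/queue/iosched/quantum)"
      tiotest -k0 -k1 -k3 -f 80 -t 32
  done

That should tell us whether dispatching more requests per round is enough
to get the read throughput back until the real fix is in.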
Thanks!
--
Jens Axboe