Message-ID: <p2t4e5e476b1004220059u2263832atf36ee33ae83463fa@mail.gmail.com>
Date: Thu, 22 Apr 2010 09:59:14 +0200
From: Corrado Zoccolo <czoccolo@...il.com>
To: Miklos Szeredi <mszeredi@...e.cz>
Cc: Jens Axboe <jens.axboe@...cle.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Jan Kara <jack@...e.cz>, Suresh Jayaraman <sjayaraman@...e.de>
Subject: Re: CFQ read performance regression
Hi Miklos,
On Wed, Apr 21, 2010 at 6:05 PM, Miklos Szeredi <mszeredi@...e.cz> wrote:
> Jens, Corrado,
>
> Here's a graph showing the number of issued but not yet completed
> requests versus time for CFQ and NOOP schedulers running the tiobench
> benchmark with 8 threads:
>
> http://www.kernel.org/pub/linux/kernel/people/mszeredi/blktrace/queue-depth.jpg
>
> It shows pretty clearly that the performance problem is that CFQ is
> not issuing enough requests to fill the bandwidth.
>
> Is this the correct behavior of CFQ or is this a bug?
This is the expected behavior from CFQ, even if it is not optimal,
since we aren't able to identify multi-spindle disks yet. Can you
post the output of "grep -r . ." run in your /sys/block/*/queue
directory, so we can see whether some parameter there helps identify
your hardware as a multi-spindle disk?
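For reference, a small loop equivalent to that "grep -r . ." which prints
every queue tunable for all block devices (the exact set of files varies
with the kernel version and the elevator in use):

```shell
# Dump every request-queue tunable, including the scheduler-specific
# iosched/ subdirectory, for all block devices on the system.
for f in /sys/block/*/queue/* /sys/block/*/queue/iosched/*; do
    [ -f "$f" ] || continue          # skip subdirectories / unmatched globs
    printf '%s = %s\n' "$f" "$(cat "$f" 2>/dev/null)"
done
```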
>
> This is on a vanilla 2.6.34-rc4 kernel with two tunables modified:
>
> read_ahead_kb=512
> low_latency=0 (for CFQ)
You should get much better throughput by setting
/sys/block/_your_disk_/queue/iosched/slice_idle to 0, or
/sys/block/_your_disk_/queue/rotational to 0.
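Concretely, assuming the disk under test shows up as sdb (substitute your
actual device), either knob can be set at runtime; both disable CFQ's idling
and let it dispatch more requests in parallel:

```shell
# CFQ-specific: stop idling between requests of a queue (needs root).
echo 0 > /sys/block/sdb/queue/iosched/slice_idle

# Generic hint that the device is non-rotational (SSD / striped array);
# CFQ uses this to relax its idling heuristics as well.
echo 0 > /sys/block/sdb/queue/rotational
```

Both settings are lost on reboot, so they are usually reapplied from an
init script or a udev rule once a good value is found.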
Thanks,
Corrado
>
> Thanks,
> Miklos
>
>
>