Message-ID: <20100726143023.GF12449@redhat.com>
Date: Mon, 26 Jul 2010 10:30:23 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Corrado Zoccolo <czoccolo@...il.com>
Cc: Christoph Hellwig <hch@...radead.org>,
linux-kernel@...r.kernel.org, axboe@...nel.dk, nauman@...gle.com,
dpshah@...gle.com, guijianfeng@...fujitsu.com, jmoyer@...hat.com
Subject: Re: [RFC PATCH] cfq-iosced: Implement IOPS mode and group_idle tunable V3
On Sat, Jul 24, 2010 at 11:07:07AM +0200, Corrado Zoccolo wrote:
> On Sat, Jul 24, 2010 at 10:51 AM, Christoph Hellwig <hch@...radead.org> wrote:
> > To me this sounds like slice_idle=0 is the right default then, as it
> > gives useful behaviour for all systems linux runs on.
> No, it will give bad performance on single disks, possibly worse than
> deadline (deadline at least sorts the requests between different
> queues, while CFQ with slice_idle=0 doesn't even do this for readers).
> Setting slice_idle to 0 should be considered only when a single
> sequential reader cannot saturate the disk bandwidth, and this happens
> only on smart enough hardware with large number of spindles.
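[For reference, slice_idle is a per-device sysfs tunable when CFQ is the active scheduler; the device name below (sda) is just an example:]

```shell
# Check the current idle slice (in ms) for CFQ on this device
cat /sys/block/sda/queue/iosched/slice_idle

# Disable idling entirely -- per the discussion above, only worth it
# when parallel readers scale, e.g. on a large multi-spindle array
echo 0 > /sys/block/sda/queue/iosched/slice_idle
```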
I was thinking of writing a user-space utility which launches an
increasing number of parallel direct/buffered reads from the device; if
the device can sustain more than one parallel read with increasing
aggregate throughput, that is probably a good indicator that one might
be better off with slice_idle=0.
Will try that today...
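[A minimal sketch of such a probe, assuming buffered reads from a regular file or device node; a real version would use O_DIRECT with aligned buffers to bypass the page cache, and the function name is hypothetical:]

```python
import os
import threading
import time


def parallel_read_throughput(path, nreaders, chunk=4 * 1024 * 1024,
                             bufsize=64 * 1024):
    """Spawn nreaders threads, each streaming a disjoint `chunk`-sized
    region of `path` sequentially; return aggregate bytes/second.

    Sketch only: buffered I/O here means the page cache can inflate
    results on repeat runs; a real probe would open with O_DIRECT and
    run against the raw block device.
    """
    done = [0] * nreaders  # bytes read per thread

    def reader(idx):
        with open(path, "rb") as f:
            f.seek(idx * chunk)        # disjoint region per reader
            remaining = chunk
            while remaining > 0:
                buf = f.read(min(bufsize, remaining))
                if not buf:
                    break
                remaining -= len(buf)
                done[idx] += len(buf)

    threads = [threading.Thread(target=reader, args=(i,))
               for i in range(nreaders)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    return sum(done) / elapsed if elapsed > 0 else 0.0
```

[One would then call this for nreaders = 1, 2, 4, ... and check whether aggregate throughput keeps climbing; if it does, the device likely benefits from slice_idle=0.]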
Vivek