Message-ID: <x49r5jedq1f.fsf@segfault.boston.devel.redhat.com>
Date:	Thu, 08 Jul 2010 10:08:44 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	linux kernel mailing list <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>,
	Corrado Zoccolo <czoccolo@...il.com>,
	Nauman Rafique <nauman@...gle.com>,
	Divyesh Shah <dpshah@...gle.com>,
	Gui Jianfeng <guijianfeng@...fujitsu.com>
Subject: Re: [RFC/RFT PATCH] cfq-iosched: Implement cfq group idling

Vivek Goyal <vgoyal@...hat.com> writes:

> On Thu, Jul 08, 2010 at 09:39:45AM -0400, Jeff Moyer wrote:
>> Vivek Goyal <vgoyal@...hat.com> writes:
>> 
>> > Currently we idle on sequential queues and allow dispatch from a single
>> > queue, and that can become a bottleneck on higher-end storage. For example,
>> > on my HP EVA, I can run multiple sequential streams and achieve a top BW
>> > of around 350 MB/s. But with CFQ, dispatching from a single queue does not
>> > keep the array busy (limited to 150-180 MB/s with 4 or 8 processes).
>> >
>> > One approach to solving this issue is simply to use slice_idle = 0. But
>> > this also takes away any service differentiation between groups.
>> 
>> That also takes away service differentiation between queues.  If you
>> want to maintain that at all, then this is really just pushing the
>> problem to another layer.
>> 
>
> Yes, it does take away the io priority within a group. But I think that's
> the trade-off, and it's not the default. Those who don't need ioprio to
> work within a group, and who know they have faster storage, will set
> slice_idle=0. For the rest of the SATA users, the default is still
> slice_idle=8.
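
[For reference, slice_idle is a per-device CFQ tunable exposed through sysfs.
A minimal sketch of checking and changing it, assuming a device named sda
that is using the cfq scheduler (device name is illustrative; requires root):]

```shell
# Confirm the device is using CFQ (the active scheduler appears in brackets)
cat /sys/block/sda/queue/scheduler

# Show the current idle slice (default 8, in milliseconds)
cat /sys/block/sda/queue/iosched/slice_idle

# Disable queue idling for fast arrays, trading per-queue service
# differentiation for higher aggregate throughput
echo 0 > /sys/block/sda/queue/iosched/slice_idle
```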

[snip]

Sorry, Vivek, I'm actually hijacking your thread.  ;-)  I know what the
alternatives are, what I'm looking for is guidance on what Jens wants to
do with CFQ.  We can discuss the merits of different approaches once we
agree on a set of requirements.

Cheers,
Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
