Message-ID: <20100722205447.GB2688@redhat.com>
Date:	Thu, 22 Jul 2010 16:54:47 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	linux-kernel@...r.kernel.org, axboe@...nel.dk, nauman@...gle.com,
	dpshah@...gle.com, guijianfeng@...fujitsu.com, jmoyer@...hat.com,
	czoccolo@...il.com
Subject: Re: [RFC PATCH] cfq-iosched: Implement IOPS mode and group_idle
 tunable V3

On Thu, Jul 22, 2010 at 01:56:02AM -0400, Christoph Hellwig wrote:
> On Wed, Jul 21, 2010 at 03:06:18PM -0400, Vivek Goyal wrote:
> > On high end storage (I tested on an HP EVA storage array with 12 SATA
> > disks in RAID 5),
> 
> That's actually quite low end storage for a server these days :)
> 
> > So this is not the default mode. This new tunable, group_idle, allows one
> > to set slice_idle=0 to disable some of the CFQ features and primarily use
> > the group service differentiation feature.
> 
> While this is better than before, needing a sysfs tweak to get any
> performance out of any kind of server-class hardware is still pretty
> horrible.  And slice_idle=0 is not exactly the most obvious parameter
> I would look for either.  So having some way to automatically disable
> this mode based on hardware characteristics would be really useful,
> and if that's not possible, at least make sure it's very obviously
> documented and easily found using web searches.
> 
> Btw, what effect does slice_idle=0 with your patches have on single SATA
> disk and single SSD setups?

Well, after responding to your mail in the morning, I realized that my
answer was convoluted and not very clear.

That forced me to change the patch a bit. With the new patches (yet to be
posted), the answer to your question is that nothing will change for single
SATA disk or single SSD setups with slice_idle=0.

Why? CFQ uses two different algorithms for cfq queue (cfqq) scheduling and
cfq group scheduling. The IOPS mode only affects group scheduling, not
cfqq scheduling.
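
To make that concrete, here is a minimal, compilable sketch of the idea.
This is not the actual patch; the struct and function names (io_group,
iops_mode(), charge_group()) are made up for illustration. It only shows
how group charging could switch between disk-time and request-count (IOPS)
accounting, while per-queue scheduling stays untouched:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct io_group {
	uint64_t vdisktime;	/* virtual time used to order groups fairly */
	unsigned int weight;	/* cgroup weight */
};

static bool iops_mode(unsigned int slice_idle)
{
	/* Assumption: IOPS accounting applies only when idling is disabled. */
	return slice_idle == 0;
}

static void charge_group(struct io_group *grp, uint64_t time_used,
			 unsigned int nr_dispatched, unsigned int slice_idle)
{
	/* In IOPS mode, charge the group by requests dispatched; otherwise
	 * charge it by the disk time it consumed. */
	uint64_t charge = iops_mode(slice_idle) ? nr_dispatched : time_used;

	/* Weighted fair queuing: heavier groups accrue vdisktime more slowly,
	 * so they get scheduled more often. */
	grp->vdisktime += charge * 1000 / grp->weight;
}

int main(void)
{
	struct io_group a = { .vdisktime = 0, .weight = 100 };
	struct io_group b = { .vdisktime = 0, .weight = 200 };

	/* slice_idle == 0: both groups dispatched 50 requests, but the heavier
	 * group b accrues less vdisktime and hence gets more service. */
	charge_group(&a, 0, 50, 0);
	charge_group(&b, 0, 50, 0);
	printf("vdisktime: a=%llu b=%llu\n",
	       (unsigned long long)a.vdisktime,
	       (unsigned long long)b.vdisktime);
	return 0;
}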

So switching to IOPS mode should not change anything for non-cgroup users
on any kind of storage. It will impact only group scheduling users, who
will start seeing fairness among groups in terms of IOPS instead of time.
Of course, slice_idle should be set to 0 only on high-end storage, so that
we get fairness among groups in IOPS while still achieving the full
potential of the storage box.
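
And for completeness, a hypothetical tuning helper. It assumes the usual
CFQ sysfs layout under /sys/block/<dev>/queue/iosched/ and that CFQ is the
active scheduler for the device; the device name and the group_idle value
below are placeholders, not recommendations from this thread:

#include <stdio.h>

static int write_tunable(const char *dev, const char *name, const char *val)
{
	char path[256];
	FILE *f;

	/* Build /sys/block/<dev>/queue/iosched/<name> and write the value. */
	snprintf(path, sizeof(path), "/sys/block/%s/queue/iosched/%s", dev, name);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* "sdb" stands in for the device backing the high end array. */
	write_tunable("sdb", "slice_idle", "0");	/* group fairness in IOPS */
	write_tunable("sdb", "group_idle", "8");	/* placeholder value */
	return 0;
}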

Thanks
Vivek 
