Date:	Fri, 23 Jul 2010 14:37:20 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Heinz Diehl <htd@...cy-poultry.org>
Cc:	linux-kernel@...r.kernel.org, jaxboe@...ionio.com,
	nauman@...gle.com, dpshah@...gle.com, guijianfeng@...fujitsu.com,
	jmoyer@...hat.com, czoccolo@...il.com
Subject: Re: [RFC PATCH] cfq-iosched: IOPS mode for group scheduling and new
 group_idle tunable

On Fri, Jul 23, 2010 at 04:56:31PM +0200, Heinz Diehl wrote:
> On 23.07.2010, Vivek Goyal wrote: 
> 
> > Thanks for doing some testing, Heinz. I am assuming you are not using
> > cgroups and the blkio controller.
> 
> Not at all.
> 
> > In that case, you are probably seeing improvements due to the first patch,
> > where we don't idle on the service tree if slice_idle=0. Hence we cut down
> > on overall idling and see a throughput increase.
> 
> Hmm, in any case it doesn't get worse when I set slice_idle to 8.
> 
> My main motivation for testing your patches was that I thought
> the other way around, and was just curious how this patchset
> would affect machines which are NOT high-end server/storage systems :-)
> 
> > What kind of configuration are these 3 disks in on your system? Some
> > hardware RAID or software RAID?
> 
> Just 3 SATA disks plugged into the onboard controller, no RAID or anything of the sort.
> 
> I used fs_mark for testing:
> "fs_mark  -S  1  -D  10000  -N  100000  -d  /home/htd/fsmark/test  -s 65536  -t  1  -w  4096  -F"
> 
> These are the results with plain cfq (2.6.35-rc6) and the settings that
> gave the best speed/throughput on my machine:
> 
> low_latency = 0
> slice_idle = 4
> quantum = 32
> 
> Setting slice_idle to 0 didn't improve anything; I had tried this before.
> 
> FSUse%        Count         Size    Files/sec     App Overhead
>     27         1000        65536        360.3            34133
>     27         2000        65536        384.4            34657
>     27         3000        65536        401.1            32994
>     27         4000        65536        394.3            33781
>     27         5000        65536        406.8            32569
>     27         6000        65536        401.9            34001
>     27         7000        65536        374.5            33192
>     27         8000        65536        398.3            32839
>     27         9000        65536        405.2            34110
>     27        10000        65536        398.9            33887
>     27        11000        65536        402.3            34111
>     27        12000        65536        398.1            33652
>     27        13000        65536        412.9            32443
>     27        14000        65536        408.1            32197
> 
> 
> And this is after applying your patchset, with your settings
> (and slice_idle = 0):
> 
> FSUse%        Count         Size    Files/sec     App Overhead
>     27         1000        65536        600.7            29579
>     27         2000        65536        568.4            30650
>     27         3000        65536        522.0            29171
>     27         4000        65536        534.1            29751
>     27         5000        65536        550.7            30168
>     27         6000        65536        521.7            30158
>     27         7000        65536        493.3            29211
>     27         8000        65536        495.3            30183
>     27         9000        65536        587.8            29881
>     27        10000        65536        469.9            29602
>     27        11000        65536        482.7            29557
>     27        12000        65536        486.6            30700
>     27        13000        65536        516.1            30243
> 
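
Averaging the Files/sec column, that is roughly 396 files/sec with plain
cfq versus about 525 with the patchset applied, i.e. around a 33%
improvement. If you save the fs_mark output to a file (fsmark-plain.txt
here is just a hypothetical name), a quick awk over it gives the mean;
column 4 is Files/sec:

awk '/^ *[0-9]/ { sum += $4; n++ } END { if (n) printf "%.1f files/sec over %d runs\n", sum / n, n }' fsmark-plain.txt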

I think the above improvement is due to the first patch and the changes in
cfq_should_idle(). cfq_should_idle() used to return 1 even if slice_idle=0,
and that created bottlenecks in a few places: for example, in select_queue()
we would not expire a queue until a request from that queue had completed,
which stopped a new queue from dispatching requests.

Anyway, for the fs_mark problem, can you give the following patch a try?

https://patchwork.kernel.org/patch/113061/
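
In case it helps, patchwork can usually export a patch as a raw mbox;
assuming that endpoint works for this patch, something like the following
should apply it to your tree (the file name and tree path are just
examples):

wget -O cfq-fix.mbox "https://patchwork.kernel.org/patch/113061/mbox/"
cd linux-2.6.35-rc6                 # your kernel source tree
git am cfq-fix.mbox                 # or: patch -p1 < cfq-fix.mbox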

The above patch should improve your fs_mark numbers even without setting
slice_idle=0.
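
For quick before/after comparisons, the cfq tunables you listed can also be
flipped at runtime through sysfs, so no reboot is needed between fs_mark
runs (sda is just an example device):

cat /sys/block/sda/queue/scheduler                 # confirm cfq is active
echo 0  > /sys/block/sda/queue/iosched/slice_idle  # disable idling
echo 32 > /sys/block/sda/queue/iosched/quantum
echo 0  > /sys/block/sda/queue/iosched/low_latency
echo 8  > /sys/block/sda/queue/iosched/slice_idle  # restore the default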

Thanks
Vivek