Message-ID: <49B4A81C.3070609@cn.fujitsu.com>
Date:	Mon, 09 Mar 2009 13:24:44 +0800
From:	Shan Wei <shanwei@...fujitsu.com>
To:	Mike Galbraith <efault@....de>, jens.axboe@...cle.com
CC:	linux-kernel@...r.kernel.org
Subject: Re: CFQ is worse than other IO schedulers in some cases

Mike Galbraith said:
> On Wed, 2009-02-18 at 14:00 +0800, Shan Wei wrote:
> 
>> With sysbench (version sysbench-0.4.10), I confirmed the following:
>>   - CFQ's performance is worse than the other IO schedulers' only in
>>     the multi-threaded test.
>>     (There is no difference in the single-thread test.)
>>   - It is worse than the other IO schedulers only in read mode.
>>     (There is no regression in write mode.)
>>   - There is no difference among the other IO schedulers (e.g. noop, deadline).
>>
>>
>> The Test Result(sysbench):
>>    UNIT: MB/sec
>>     __________________________________________________
>>     |   IO       |      thread  number               |  
>>     | scheduler  |-----------------------------------|
>>     |            |  1   |  3    |  5   |   7  |   9  |
>>     +------------|------|-------|------|------|------|
>>     |cfq         | 77.8 |  32.4 | 43.3 | 55.8 | 58.5 | 
>>     |noop        | 78.2 |  79.0 | 78.2 | 77.2 | 77.0 |
>>     |anticipatory| 78.2 |  78.6 | 78.4 | 77.8 | 78.1 |
>>     |deadline    | 76.9 |  78.4 | 77.0 | 78.4 | 77.9 |
>>     +------------------------------------------------+
> 
> My Q6600 box agrees that cfq produces less throughput doing this test,
> but throughput here is ~flat. Disk is external SATA ST3500820AS.
>     _________________________________________________
>     |   IO       |     thread  number               |  
>     | scheduler  |----------------------------------|
>     |            |  1   |  3   |  5   |  7   |  9   |
>     +------------|------|------|------|------|------|
>     |cfq         | 84.4 | 89.1 | 91.3 | 88.8 | 88.8 |
>     |noop        |102.9 | 99.3 | 99.4 | 99.7 | 98.7 | 
>     |anticipatory|100.5 |100.1 | 99.8 | 99.7 | 99.6 | 
>     |deadline    | 97.9 | 98.7 | 99.5 | 99.5 | 99.3 | 
>     +-----------------------------------------------+
> 

I have tested with the sysbench tool on a SATA disk under 2.6.29-rc6,
with no RAID configured.

[root@...id software]# lspci -nn
...snip...
00:02.5 IDE interface [0101]: Silicon Integrated Systems [SiS] 5513 [IDE] [1039:5513] (rev 01)
00:05.0 IDE interface [0101]: Silicon Integrated Systems [SiS] RAID bus controller 180 SATA/PATA  [SiS] [1039:0180] (rev 01)
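
Each scheduler is selected per run through sysfs (presumably what the script
below does); a minimal sketch, assuming the disk shows up as sda:

    # pick the scheduler for the next run (device name sda is an assumption)
    echo cfq > /sys/block/sda/queue/scheduler
    # verify: the active scheduler is shown in brackets
    cat /sys/block/sda/queue/scheduler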

The attached script (sysbench-threads.sh) executes sysbench 4 times for each
I/O scheduler; a sketch of the kind of invocation it wraps follows the table.
The averaged results (in MB/sec) are below:
     ________________________________________
     |   IO       |     thread  number       |  
     | scheduler  |--------------------------|
     |            |  1     |  3     |  5     |  
     +------------|--------|--------|--------|
     |cfq         | 60.324 | 33.982 | 37.309 |
     |noop        | 57.391 | 60.406 | 57.355 | 
     |anticipatory| 58.962 | 59.342 | 56.999 | 
     |deadline    | 57.791 | 60.097 | 57.700 | 
     +---------------------------------------+
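
For readers without the attachment, a rough sketch of the kind of sysbench 0.4
invocation such a script might wrap (the file size, test mode and paths here
are assumptions; the actual commands are in the attached script):

    # prepare the test files once, then run the read test per thread count
    sysbench --test=fileio --file-total-size=2G prepare
    for n in 1 3 5; do
        # seqrd = sequential read; the scheduler is switched via sysfs
        # (as in the earlier sketch) before each scheduler's set of runs
        sysbench --test=fileio --file-total-size=2G \
                 --file-test-mode=seqrd --num-threads=$n run
    done
    sysbench --test=fileio cleanup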

I am wondering about my result vs. Mike's:
why is the regression under multi-thread not present on Mike's box?

Jens, the multiple threads read the same file in an interleaved fashion, and
there may be requests that could be merged but are not, because they sit in
different per-thread queues. Is that why CFQ performs poorly here?
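
One way to check this, if useful: compare the kernel's merge counters across
schedulers while the read test runs. A rough sketch, assuming the disk is sda
(fields 5 and 9 of /proc/diskstats are reads merged and writes merged):

    # snapshot the merge counters before and after a run, then diff them
    awk '$3 == "sda" { print "reads merged:", $5, "writes merged:", $9 }' /proc/diskstats

blktrace would give a per-request view, but the counters above should already
show whether fewer merges happen under cfq.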

The .config for 2.6.29-rc6 is attached; I hope it's helpful.

Download attachment "sysbench-threads.sh" of type "application/x-sh" (2852 bytes)

View attachment "2.6.29-rc6_config" of type "text/plain" (63555 bytes)
