Message-ID: <499BA413.2010705@cn.fujitsu.com>
Date: Wed, 18 Feb 2009 14:00:51 +0800
From: Shan Wei <shanwei@...fujitsu.com>
To: jens.axboe@...cle.com
CC: linux-kernel@...r.kernel.org
Subject: CFQ is worse than other IO schedulers in some cases
I found that CFQ's performance is worse than that of the other IO schedulers in some cases.
I observed this when running the dump command and sysbench on 2.6.28.
With dump (version dump-0.4b41-2.fc6), the transfer speed under CFQ is
slower than under the other IO schedulers.
The Test Result (dump):
UNIT: Mb/sec
+------------+--------+
| IO         |        |
| scheduler  | Speed  |
+------------+--------+
|cfq         | 24.310 |
|noop        | 36.885 |
|anticipatory| 34.956 |
|deadline    | 36.758 |
+------------+--------+
Steps to reproduce (dump):
#dump -0uf /dev/null /dev/sda6
#df -h /dev/sda6
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6              19G   10G  7.6G  57% /mnt
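The four schedulers can be compared in turn with a rough loop like the one
below; the drop_caches step between runs is my own extra precaution, not part
of the dump command above:

# compare dump throughput under each IO scheduler in turn
# (assumes sda is the disk under test)
for sched in cfq noop anticipatory deadline; do
    echo $sched > /sys/block/sda/queue/scheduler
    echo 3 > /proc/sys/vm/drop_caches       # start from a cold page cache
    echo "scheduler: $sched"
    time dump -0uf /dev/null /dev/sda6      # dump prints its average transfer rate at the end
done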
With sysbench (version sysbench-0.4.10), I observed the following:
- CFQ's performance is worse than that of the other IO schedulers, but only
  in the multi-threaded test.
  (There is no difference in the single-thread test.)
- The regression shows up in read mode only. (No regression in write mode.)
- There is no difference among the other IO schedulers (e.g. noop, deadline).
The Test Result (sysbench):
UNIT: Mb/sec
+------------+------+------+------+------+------+
| IO         |           thread number          |
| scheduler  +------+------+------+------+------+
|            |   1  |   3  |   5  |   7  |   9  |
+------------+------+------+------+------+------+
|cfq         | 77.8 | 32.4 | 43.3 | 55.8 | 58.5 |
|noop        | 78.2 | 79.0 | 78.2 | 77.2 | 77.0 |
|anticipatory| 78.2 | 78.6 | 78.4 | 77.8 | 78.1 |
|deadline    | 76.9 | 78.4 | 77.0 | 78.4 | 77.9 |
+------------+------+------+------+------+------+
Steps to reproduce (sysbench):
(1)#echo cfq > /sys/block/sda/queue/scheduler
(2)#sysbench --test=fileio --num-threads=1 --file-total-size=10G --file-test-mode=seqrd prepare
(3)#sysbench --test=fileio --num-threads=1 --file-total-size=10G --file-test-mode=seqrd run
[snip]
Operations performed: 655360 Read, 0 Write, 0 Other = 655360 Total
Read 10Gb Written 0b Total transferred 10Gb (77.835Mb/sec)
4981.44 Requests/sec executed
(4)#sysbench --test=fileio --num-threads=1 --file-total-size=10G --file-test-mode=seqrd cleanup
(5)#sysbench --test=fileio --num-threads=5 --file-total-size=10G --file-test-mode=seqrd prepare
(6)#sysbench --test=fileio --num-threads=5 --file-total-size=10G --file-test-mode=seqrd run
[snip]
Operations performed: 655360 Read, 0 Write, 0 Other = 655360 Total
Read 10Gb Written 0b Total transferred 10Gb (43.396Mb/sec)
2777.35 Requests/sec executed
(7)#sysbench --test=fileio --num-threads=5 --file-total-size=10G --file-test-mode=seqrd cleanup
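The whole table above can be collected with a loop along these lines; the
drop_caches step and the grep on the "Total transferred" output line are my
additions, adjust as needed:

# run the sysbench seqrd matrix for every scheduler / thread count
for sched in cfq noop anticipatory deadline; do
    echo $sched > /sys/block/sda/queue/scheduler
    for n in 1 3 5 7 9; do
        sysbench --test=fileio --num-threads=$n --file-total-size=10G \
                 --file-test-mode=seqrd prepare > /dev/null
        echo 3 > /proc/sys/vm/drop_caches   # cold cache before each run
        echo -n "$sched, $n threads: "
        sysbench --test=fileio --num-threads=$n --file-total-size=10G \
                 --file-test-mode=seqrd run | grep "Total transferred"
        sysbench --test=fileio --num-threads=$n --file-total-size=10G \
                 --file-test-mode=seqrd cleanup > /dev/null
    done
done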
In step 2 or 5, sysbench creates 128 files of 80MB each.
In step 4 or 7, sysbench deletes those files.
In step 3 or 6, the threads read these files continuously,
file-block-size (default: 16KB) at a time, like this:
t_0 t_0 t_0 t_0 t_0 t_0 t_0
^ ^ ^ ^ ^ ^ ^
---|-----|-----|-----|-----|-----|-----|--------
file | 16k | 16k | 16k | 16k | 16k | 16k | 16k | ...
------------------------------------------------
(num-threads=1)
(t_0 stands for the first thread)
t_0 t_1 t_2 t_3 t_4 t_0 t_1
^ ^ ^ ^ ^ ^ ^
---|-----|-----|-----|-----|-----|-----|--------
file | 16k | 16k | 16k | 16k | 16k | 16k | 16k | ...
------------------------------------------------
(num-threads=5)
(which thread issues the next read is decided by the thread scheduler)
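To make the picture above concrete, here is a small dd-based sketch (not
sysbench itself) of the 5-thread pattern: each background job plays the role
of one thread and reads every 5th 16KB block of one prepared file (assumed to
be named test_file.0); as noted, the real interleaving depends on the thread
scheduler:

# illustrate the interleaved 16k read pattern with 5 "threads"
NTHREADS=5
BLOCKS=$((80 * 1024 / 16))          # one 80MB file = 5120 blocks of 16k
for t in $(seq 0 $((NTHREADS - 1))); do
    (
        b=$t
        while [ $b -lt $BLOCKS ]; do
            # "thread" t reads blocks t, t+5, t+10, ...
            dd if=test_file.0 of=/dev/null bs=16k skip=$b count=1 2>/dev/null
            b=$((b + NTHREADS))
        done
    ) &
done
wait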
Hardware info:
Arch   : x86_64
CPU    : 4 CPUs; GenuineIntel 3325.087 MHz
Memory : 4044128 kB
----
Shan Wei