Message-ID: <4E391986.90108@cn.fujitsu.com>
Date: Wed, 03 Aug 2011 17:48:54 +0800
From: Gui Jianfeng <guijianfeng@...fujitsu.com>
To: Shaohua Li <shli@...nel.org>
CC: Vivek Goyal <vgoyal@...hat.com>, Jens Axboe <jaxboe@...ionio.com>,
linux-kernel@...r.kernel.org
Subject: Re: fio posixaio performance problem
On 2011-8-3 16:22, Shaohua Li wrote:
> 2011/8/3 Gui Jianfeng <guijianfeng@...fujitsu.com>:
>> On 2011-8-3 15:38, Shaohua Li wrote:
>>> 2011/8/3 Gui Jianfeng <guijianfeng@...fujitsu.com>:
>>>> Hi,
>>>>
>>>> I ran a fio test to simulate qemu-kvm io behaviour.
>>>> When the number of jobs is greater than 2, I/O
>>>> performance is really bad.
>>>>
>>>> 1 thread: aggrb=15,129KB/s
>>>> 4 thread: aggrb=1,049KB/s
>>>>
>>>> Kernel: latest upstream
>>>>
>>>> Any idea?
>>>>
>>>> ---
>>>> [global]
>>>> runtime=30
>>>> time_based=1
>>>> size=1G
>>>> group_reporting=1
>>>> ioengine=posixaio
>>>> exec_prerun='echo 3 > /proc/sys/vm/drop_caches'
>>>> thread=1
>>>>
>>>> [kvmio-1]
>>>> description=kvmio-1
>>>> numjobs=4
>>>> rw=write
>>>> bs=4k
>>>> direct=1
>>>> filename=/mnt/sda4/1G.img
>>> Hmm, the test always runs at about 15MB/s on my side, regardless of how many threads.
>>
>> CFQ?
> yes.
>
>> What's the slice_idle value?
> The default value; I didn't change it.
Hmm, I'm using a SATA disk, and I can reproduce this bug every time...
Thanks,
Gui
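[For reference: the CFQ behavior discussed above can be inspected from sysfs. A minimal sketch for checking the active scheduler and CFQ's slice_idle tunable; the device name `sda` is an assumption here, so pass your own disk name as the first argument:]

```shell
#!/bin/sh
# Print the active I/O scheduler for a block device and, when CFQ is
# active, its slice_idle value (default is typically 8). Assumes the
# device is "sda" unless overridden by the first argument.
DEV=${1:-sda}
SCHED_FILE="/sys/block/$DEV/queue/scheduler"
IDLE_FILE="/sys/block/$DEV/queue/iosched/slice_idle"

if [ -r "$SCHED_FILE" ]; then
    echo "scheduler: $(cat "$SCHED_FILE")"
else
    echo "no scheduler file for $DEV (device not present?)"
fi

if [ -r "$IDLE_FILE" ]; then
    echo "slice_idle: $(cat "$IDLE_FILE")"
fi
```

[Writing 0 to slice_idle disables CFQ's per-queue idling, which is a common experiment when several threads doing small direct writes to the same file show a collapse like the one reported above.]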
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/