Message-ID: <4E39FC2F.4090501@cn.fujitsu.com>
Date:	Thu, 04 Aug 2011 09:55:59 +0800
From:	Gui Jianfeng <guijianfeng@...fujitsu.com>
To:	Vivek Goyal <vgoyal@...hat.com>
CC:	Shaohua Li <shli@...nel.org>, Jens Axboe <jaxboe@...ionio.com>,
	linux-kernel@...r.kernel.org
Subject: Re: fio posixaio performance problem

On 2011-8-4 1:51, Vivek Goyal wrote:
> On Wed, Aug 03, 2011 at 11:45:33AM -0400, Vivek Goyal wrote:
>> On Wed, Aug 03, 2011 at 05:48:54PM +0800, Gui Jianfeng wrote:
>>> On 2011-8-3 16:22, Shaohua Li wrote:
>>>> 2011/8/3 Gui Jianfeng <guijianfeng@...fujitsu.com>:
>>>>> On 2011-8-3 15:38, Shaohua Li wrote:
>>>>>> 2011/8/3 Gui Jianfeng <guijianfeng@...fujitsu.com>:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I ran a fio test to simulate qemu-kvm IO behaviour.
>>>>>>> When the number of jobs is greater than 2, IO performance is
>>>>>>> really bad.
>>>>>>>
>>>>>>> 1 thread: aggrb=15,129KB/s
>>>>>>> 4 threads: aggrb=1,049KB/s
>>>>>>>
>>>>>>> Kernel: latest upstream
>>>>>>>
>>>>>>> Any idea?
>>>>>>>
>>>>>>> ---
>>>>>>> [global]
>>>>>>> runtime=30
>>>>>>> time_based=1
>>>>>>> size=1G
>>>>>>> group_reporting=1
>>>>>>> ioengine=posixaio
>>>>>>> exec_prerun='echo 3 > /proc/sys/vm/drop_caches'
>>>>>>> thread=1
>>>>>>>
>>>>>>> [kvmio-1]
>>>>>>> description=kvmio-1
>>>>>>> numjobs=4
>>>>>>> rw=write
>>>>>>> bs=4k
>>>>>>> direct=1
>>>>>>> filename=/mnt/sda4/1G.img
>>>>>> Hmm, the test always runs at about 15MB/s on my side regardless of how many threads.
>>>>>
>>>>> CFQ?
>>>> yes.
>>>>
>>>>> What's the slice_idle value?
>>>> default value. I didn't change it.
>>>
>>> Hmm, I use a SATA disk and can reproduce this bug every time...
>>
>> Do you have a blktrace of the run with 4 jobs?
> 
> I can't reproduce it either. On my SATA disk a single thread gets around
> 23-24MB/s and 4 threads get around 19-20MB/s. Some of the throughput
> is lost to seeking, so that is expected.
> 
> I think what you are trying to point out is an idling issue. In your workload
> every thread is doing sync-idle IO, so idling is enabled for each thread.
> On my system I see that the next thread preempts the currently idling thread
> because they are all doing IO in a nearby area of the file, so rq_close() is
> true and preemption is allowed.
> 
> On your system, I think somehow rq_close() is not true, hence preemption
> does not take place and we continue to idle on that thread. That by itself
> is not necessarily too bad, but it might be that we are waiting for the
> completion of IO from some other thread before the thread we are idling on
> can do more writes, due to some filesystem restriction, and that can lead
> to a sudden throughput drop. blktrace will give some idea.

Hi Vivek,

I got a blktrace when starting 4 threads. It seems preemption doesn't happen,
and we are idling on one thread for 8ms all the time... I'm also curious why
fio doesn't issue enough IO from the thread whose corresponding cfqq is being idled on.
Anyway, I don't want to disable idling on my box... So, any thoughts?
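
For reference, here is a minimal standalone sketch of the kind of close-request
distance check rq_close() performs; the helper names, the threshold, and the
sector numbers below are made-up placeholders for illustration, not the kernel's
actual code or values (the real logic lives in block/cfq-iosched.c).

#include <stdio.h>

typedef unsigned long long sector_t;

/* assumed threshold in 512-byte sectors; placeholder, not the kernel's constant */
#define CLOSE_THR_SECTORS 1024ULL

/* distance between the new request and the last dispatched position */
static sector_t dist_from_last(sector_t last_pos, sector_t new_pos)
{
	return new_pos >= last_pos ? new_pos - last_pos : last_pos - new_pos;
}

/* preemption of an idling queue is only considered when the new IO is
 * "close" to where the disk head already is */
static int rq_is_close(sector_t last_pos, sector_t new_pos)
{
	return dist_from_last(last_pos, new_pos) <= CLOSE_THR_SECTORS;
}

int main(void)
{
	sector_t last_pos = 1000000ULL;	/* hypothetical last dispatched sector */
	sector_t new_pos  = 1000040ULL;	/* hypothetical newly queued request */

	printf("distance = %llu sectors, close = %d\n",
	       dist_from_last(last_pos, new_pos),
	       rq_is_close(last_pos, new_pos));
	return 0;
}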

I've extracted a piece of the blktrace output here.
----
  8,0    1     1218     0.894717976  2781  U   N [iotop] 2
  8,0    1     1219     0.929193602     0  C   R 197622056 + 8 [0]
  8,0    1        0     0.929206942     0  m   N cfq2781S / complete rqnoidle 0
  8,0    1        0     0.929211272     0  m   N cfq2781S / arm_idle: 8 group_idle: 0
  8,0    1        0     0.929212320     0  m   N cfq schedule dispatch
  8,0    1        0     0.936962634     0  m   N cfq idle timer fired
  8,0    1        0     0.936965847     0  m   N cfq2781S / slice expired t=0
  8,0    1        0     0.936968361     0  m   N / served: vt=1151069872 min_vt=1150946992
  8,0    1        0     0.936971295     0  m   N cfq2781S / sl_used=60 disp=5 charge=60 iops=0 sect=288
  8,0    1        0     0.936972831     0  m   N cfq2781S / del_from_rr
  8,0    1        0     0.936974298     0  m   N cfq schedule dispatch
  8,0    1        0     0.936996787     0  m   N cfq workload slice:100
  8,0    1        0     0.936998812     0  m   N cfq12700S / set_active wl_prio:0 wl_type:2
  8,0    1        0     0.937000908     0  m   N cfq12700S / fifo=(null)
  8,0    1        0     0.937002374     0  m   N cfq12700S / dispatch_insert
  8,0    1        0     0.937004679     0  m   N cfq12700S / dispatched a request
  8,0    1        0     0.937006216     0  m   N cfq12700S / activate rq, drv=1
  8,0    1     1220     0.937008171    35  D   W 514703416 + 8 [kblockd/1]
  8,0    1     1221     0.937242492     0  C   W 514703416 + 8 [0]
  8,0    1        0     0.937252269     0  m   N cfq12700S / complete rqnoidle 0
  8,0    1        0     0.937254365     0  m   N cfq12700S / set_slice=100
  8,0    1        0     0.937258136     0  m   N cfq12700S / arm_idle: 8 group_idle: 0
  8,0    1        0     0.937259254     0  m   N cfq schedule dispatch
  8,0    0     2032     0.937429948 12702  A  WS 514703464 + 8 <- (8,4) 91175016
  8,0    0     2033     0.937432532 12702  Q  WS 514703464 + 8 [fio]
  8,0    0     2034     0.937436164 12702  G  WS 514703464 + 8 [fio]
  8,0    0     2035     0.937439516 12702  P   N [fio]
  8,0    0     2036     0.937441542 12702  I   W 514703464 + 8 [fio]
  8,0    0        0     0.937445034     0  m   N cfq12702S / insert_request
  8,0    0        0     0.937446640     0  m   N cfq12702S / add_to_rr
  8,0    0     2037     0.937450272 12702  U   N [fio] 1
  8,0    1        0     0.944890139     0  m   N cfq idle timer fired
  8,0    1        0     0.944893352     0  m   N cfq12700S / slice expired t=0
  8,0    1        0     0.944895936     0  m   N / served: vt=1151086256 min_vt=1151069872
  8,0    1        0     0.944898590     0  m   N cfq12700S / sl_used=8 disp=1 charge=8 iops=0 sect=8
  8,0    1        0     0.944900196     0  m   N cfq12700S / del_from_rr
  8,0    1        0     0.944901593     0  m   N cfq schedule dispatch
  8,0    1        0     0.944932324     0  m   N cfq workload slice:100
  8,0    1        0     0.944934349     0  m   N cfq12702S / set_active wl_prio:0 wl_type:1
  8,0    1        0     0.944936444     0  m   N cfq12702S / fifo=(null)
  8,0    1        0     0.944937911     0  m   N cfq12702S / dispatch_insert
  8,0    1        0     0.944940286     0  m   N cfq12702S / dispatched a request
  8,0    1        0     0.944941822     0  m   N cfq12702S / activate rq, drv=1
  8,0    1     1222     0.944943638    35  D   W 514703464 + 8 [kblockd/1]
  8,0    1     1223     0.945166504     0  C   W 514703464 + 8 [0]
  8,0    1        0     0.945175863     0  m   N cfq12702S / complete rqnoidle 0
  8,0    1        0     0.945177958     0  m   N cfq12702S / set_slice=100
  8,0    1        0     0.945181660     0  m   N cfq12702S / arm_idle: 8 group_idle: 0
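
As a quick sanity check, the idle windows in the trace (from "arm_idle: 8" to
"cfq idle timer fired") each come out close to the default 8ms slice_idle. A
small standalone program, using the timestamp pairs copied from the trace above:

#include <stdio.h>

int main(void)
{
	/* "arm_idle" -> "cfq idle timer fired" timestamp pairs, in seconds,
	 * copied from the blktrace output above */
	double armed[] = { 0.929211272, 0.937258136 };
	double fired[] = { 0.936962634, 0.944890139 };

	for (int i = 0; i < 2; i++)
		printf("idle window %d: %.3f ms\n", i + 1,
		       (fired[i] - armed[i]) * 1000.0);

	/* both windows come out just under 8 ms, i.e. the queue sat idle
	 * for essentially a full slice_idle before expiring */
	return 0;
}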


Thanks,
Gui

> 
> Thanks
> Vivek

