Date:	Tue, 20 Mar 2007 21:58:49 -0700 (PDT)
From:	Davide Libenzi <davidel@...ilserver.org>
To:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
cc:	Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Jens Axboe <jens.axboe@...cle.com>
Subject: AIO, FIO and Threads ...


I was looking at Jens' FIO stuff, and I decided to cook a quick patch for 
FIO to support GUASI (Generic Userspace Asynchronous Syscall Interface):

http://www.xmailserver.org/guasi-lib.html

I then ran a few tests on my Dual Opteron 252 with SATA drives (sata_nv) 
and 8GB of RAM.
Mind that I'm no FIO expert, at all, but I got some interesting 
results when comparing GUASI with libaio at 8/1000/10000 iodepths.
If I read those results correctly (Jens may help), GUASI's output is more 
than double libaio's.
Lots of context switches, yes. But the throughput looks like 2+ times.
Can someone try to repeat the measurements and/or spot the error?
Or tell me which other tests to run?
This is kind of a surprise to me ...



PS: The FIO patch to support GUASI is attached. You also need to fetch 
    GUASI and (configure && make install) it.
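For reference, each command line below should be equivalent to a small fio
job file; a sketch for the first (iodepth=8, GUASI) run, with ioengine and
iodepth swapped for the other runs:

```
[global]
rw=randread
size=64m

[job1]
ioengine=guasi
iodepth=8
thread
```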



- Davide



>> fio --name=global --rw=randread --size=64m --ioengine=guasi --name=job1 --iodepth=8 --thread

job1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=guasi, iodepth=8
Starting 1 thread
Jobs: 1: [r] [100.0% done] [  3135/     0 kb/s] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=29298
  read : io=65,536KiB, bw=1,576KiB/s, iops=384, runt= 42557msec
    slat (msec): min=    0, max=    0, avg= 0.00, stdev= 0.00
    clat (msec): min=    0, max=  212, avg=20.26, stdev=18.83
    bw (KiB/s) : min= 1166, max= 3376, per=98.51%, avg=1552.50, stdev=317.42
  cpu          : usr=7.69%, sys=92.99%, ctx=97648
  IO depths    : 1=0.0%, 2=0.0%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     lat (msec): 2=1.4%, 4=3.6%, 10=25.3%, 20=34.0%, 50=28.1%, 100=6.8%
     lat (msec): 250=0.8%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2000=0.0%

Run status group 0 (all jobs):
   READ: io=65,536KiB, aggrb=1,576KiB/s, minb=1,576KiB/s, maxb=1,576KiB/s, mint=42557msec, maxt=42557msec

Disk stats (read/write):
  sda: ios=16376/98, merge=8/135, ticks=339481/2810, in_queue=342290, util=99.17%


>> fio --name=global --rw=randread --size=64m --ioengine=libaio --name=job1 --iodepth=8 --thread

job1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=8
Starting 1 thread
Jobs: 1: [r] [95.9% done] [  2423/     0 kb/s] [eta 00m:03s]
job1: (groupid=0, jobs=1): err= 0: pid=29332
  read : io=65,536KiB, bw=929KiB/s, iops=226, runt= 72181msec
    slat (msec): min=    0, max=   98, avg=31.30, stdev=15.53
    clat (msec): min=    0, max=    0, avg= 0.00, stdev= 0.00
    bw (KiB/s) : min=  592, max= 2835, per=98.56%, avg=915.58, stdev=325.29
  cpu          : usr=0.02%, sys=0.34%, ctx=23023
  IO depths    : 1=22.2%, 2=22.2%, 4=44.4%, 8=11.1%, 16=0.0%, 32=0.0%, >=64=0.0%
     lat (msec): 2=100.0%, 4=0.0%, 10=0.0%, 20=0.0%, 50=0.0%, 100=0.0%
     lat (msec): 250=0.0%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2000=0.0%

Run status group 0 (all jobs):
   READ: io=65,536KiB, aggrb=929KiB/s, minb=929KiB/s, maxb=929KiB/s, mint=72181msec, maxt=72181msec

Disk stats (read/write):
  sda: ios=16384/43, merge=0/42, ticks=71889/20573, in_queue=92461, util=99.57%


>> fio --name=global --rw=randread --size=64m --ioengine=guasi --name=job1 --iodepth=1000 --thread

job1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=guasi, iodepth=1000
Starting 1 thread
Jobs: 1: [r] [93.9% done] [   815/     0 kb/s] [eta 00m:02s]
job1: (groupid=0, jobs=1): err= 0: pid=29343
  read : io=65,536KiB, bw=2,130KiB/s, iops=520, runt= 31500msec
    slat (msec): min=    0, max=   26, avg= 1.02, stdev= 4.19
    clat (msec): min=   12, max=28024, avg=1920.73, stdev=764.20
    bw (KiB/s) : min= 1139, max= 3376, per=95.21%, avg=2027.87, stdev=354.38
  cpu          : usr=7.35%, sys=93.77%, ctx=104637
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.2%, >=64=99.6%
     lat (msec): 2=0.0%, 4=0.0%, 10=0.0%, 20=0.0%, 50=0.1%, 100=0.4%
     lat (msec): 250=1.2%, 500=1.0%, 750=0.8%, 1000=0.7%, >=2000=45.5%

Run status group 0 (all jobs):
   READ: io=65,536KiB, aggrb=2,130KiB/s, minb=2,130KiB/s, maxb=2,130KiB/s, mint=31500msec, maxt=31500msec

Disk stats (read/write):
  sda: ios=16267/31, merge=115/28, ticks=4019824/313471, in_queue=4333625, util=98.84%


>> fio --name=global --rw=randread --size=64m --ioengine=libaio --name=job1 --iodepth=1000 --thread

job1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=1000
Starting 1 thread
Jobs: 1: [r] [98.6% done] [  4083/     0 kb/s] [eta 00m:01s]]
job1: (groupid=0, jobs=1): err= 0: pid=30346
  read : io=65,536KiB, bw=920KiB/s, iops=224, runt= 72925msec
    slat (msec): min=    0, max= 5539, avg=4431.27, stdev=1268.03
    clat (msec): min=    0, max=    0, avg= 0.00, stdev= 0.00
    bw (KiB/s) : min=    0, max= 2361, per=103.56%, avg=952.75, stdev=499.54
  cpu          : usr=0.02%, sys=0.39%, ctx=23089
  IO depths    : 1=0.2%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4%
     lat (msec): 2=100.0%, 4=0.0%, 10=0.0%, 20=0.0%, 50=0.0%, 100=0.0%
     lat (msec): 250=0.0%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2000=0.0%

Run status group 0 (all jobs):
   READ: io=65,536KiB, aggrb=920KiB/s, minb=920KiB/s, maxb=920KiB/s, mint=72925msec, maxt=72925msec

Disk stats (read/write):
  sda: ios=16384/70, merge=0/54, ticks=72644/31038, in_queue=103682, util=99.61%


>> fio --name=global --rw=randread --size=64m --ioengine=guasi --name=job1 --iodepth=10000 --thread

job1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=guasi, iodepth=10000
Starting 1 thread
Jobs: 1: [r] [100.0% done] [ 40752/     0 kb/s] [eta 00m:00s]
job1: (groupid=0, jobs=1): err= 0: pid=32203
  read : io=65,536KiB, bw=1,965KiB/s, iops=479, runt= 34148msec
    slat (msec): min=    0, max=  323, avg=124.06, stdev=112.39
    clat (msec): min=    0, max=33982, avg=20686.86, stdev=13689.22
    bw (KiB/s) : min=    1, max= 2187, per=94.75%, avg=1861.75, stdev=392.89
  cpu          : usr=0.35%, sys=2.42%, ctx=166667
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.2%, >=64=99.6%
     lat (msec): 2=0.0%, 4=0.0%, 10=0.0%, 20=0.1%, 50=0.5%, 100=1.5%
     lat (msec): 250=5.0%, 500=5.6%, 750=1.8%, 1000=0.8%, >=2000=2.3%

Run status group 0 (all jobs):
   READ: io=65,536KiB, aggrb=1,965KiB/s, minb=1,965KiB/s, maxb=1,965KiB/s, mint=34148msec, maxt=34148msec

Disk stats (read/write):
  sda: ios=16064/122, merge=319/73, ticks=4350268/172548, in_queue=4521657, util=98.95%



>> fio --name=global --rw=randread --size=64m --ioengine=libaio --name=job1 --iodepth=10000 --thread

job1: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=10000
Starting 1 thread
Jobs: 1: [r] [61.3% done] [     0/     0 kb/s] [eta 00m:46s]]
job1: (groupid=0, jobs=1): err= 0: pid=9791
  read : io=65,536KiB, bw=917KiB/s, iops=224, runt= 73118msec
    slat (msec): min=    1, max=52656, avg=40082.23, stdev=15703.83
    clat (msec): min=    0, max=    3, avg= 2.61, stdev= 0.49
    bw (KiB/s) : min=    0, max= 2002, per=109.16%, avg=1001.00, stdev=1415.63
  cpu          : usr=0.02%, sys=0.40%, ctx=23095
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
     lat (msec): 2=0.0%, 4=100.0%, 10=0.0%, 20=0.0%, 50=0.0%, 100=0.0%
     lat (msec): 250=0.0%, 500=0.0%, 750=0.0%, 1000=0.0%, >=2000=0.0%

Run status group 0 (all jobs):
   READ: io=65,536KiB, aggrb=917KiB/s, minb=917KiB/s, maxb=917KiB/s, mint=73118msec, maxt=73118msec

Disk stats (read/write):
  sda: ios=16384/82, merge=0/86, ticks=72720/36477, in_queue=109197, util=99.44%


View attachment "fio-guasi-0.2.diff" of type "TEXT/x-diff" (8085 bytes)
