Date:	Tue, 10 Nov 2009 19:05:19 +0100
From:	Corrado Zoccolo <>
To:	Vivek Goyal <>
Subject: Re: [RFC] Workload type Vs Groups (Was: Re: [PATCH 02/20] blkio: 
	Change CFQ to use CFS like queue time stamps)

On Tue, Nov 10, 2009 at 3:12 PM, Vivek Goyal <> wrote:
> Ok, I ran some simple tests on my NCQ SSD. I had pulled Jens' branch a
> few days back and it has your patches in it.
> I am running three direct sequential readers of prio 0, 4 and 7
> respectively using fio for 10 seconds and then monitoring who got how
> much work done.
> Following is my fio job file
> ****************************************************************
> [global]
> ioengine=sync
> runtime=10
> size=1G
> rw=read
> directory=/mnt/sdc/fio/
> direct=1
> bs=4K
> exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
> [seqread0]
> prio=0
> [seqread4]
> prio=4
> [seqread7]
> prio=7
> ************************************************************************

Can you try without direct and bs?
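
In case it helps, that would be the quoted job file with the direct=1
and bs=4K lines dropped (untested here, just to illustrate the
suggestion), e.g.:
****************************************************************
[global]
ioengine=sync
runtime=10
size=1G
rw=read
directory=/mnt/sdc/fio/
; direct=1 and bs=4K left out: buffered I/O with the default block
; size, so kernel readahead can kick in
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
[seqread0]
prio=0
[seqread4]
prio=4
[seqread7]
prio=7
************************************************************************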

> Following are the results of 4 runs. Each run lists three jobs, of prio 0,
> prio 4 and prio 7 respectively.
> First run
> =========
> read : io=75,996KB, bw=7,599KB/s, iops=1,899, runt= 10001msec
> read : io=95,920KB, bw=9,591KB/s, iops=2,397, runt= 10001msec
> read : io=21,068KB, bw=2,107KB/s, iops=526, runt= 10001msec
> Second run
> ==========
> read : io=103MB, bw=10,540KB/s, iops=2,635, runt= 10001msec
> read : io=102MB, bw=10,479KB/s, iops=2,619, runt= 10001msec
> read : io=720KB, bw=73,728B/s, iops=18, runt= 10000msec
> Third Run
> =========
> read : io=103MB, bw=10,532KB/s, iops=2,632, runt= 10001msec
> read : io=85,728KB, bw=8,572KB/s, iops=2,142, runt= 10001msec
> read : io=19,696KB, bw=1,969KB/s, iops=492, runt= 10001msec
> Fourth Run
> ==========
> read : io=50,060KB, bw=5,005KB/s, iops=1,251, runt= 10001msec
> read : io=102MB, bw=10,409KB/s, iops=2,602, runt= 10001msec
> read : io=54,844KB, bw=5,484KB/s, iops=1,370, runt= 10001msec
> I can't see fairness being provided to processes of different prio levels.
> In the first run the prio 4 process got more BW than the prio 0 process.
> In the second run the prio 7 process got completely starved. Based on the
> slice calculation, the difference between prio 0 and prio 7 should be
> 180/40 = 4.5.
> The third run is a bit better.
> In the fourth run prio 4 again got double the BW of prio 0.
> So I can't see how you are achieving fairness on this NCQ SSD.
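
For reference, the 180/40 figure above follows from CFQ's per-priority
slice scaling, i.e. the cfq_prio_slice() arithmetic with the default
100 ms sync base slice; a minimal standalone sketch of just that
arithmetic (not the kernel code itself) is:

#include <stdio.h>

int main(void)
{
	const int base_slice = 100;	/* default cfq_slice_sync, in ms */
	const int scale = 5;		/* CFQ_SLICE_SCALE */
	int prio;

	for (prio = 0; prio <= 7; prio++) {
		/* slice = base + base/scale * (4 - prio) */
		int slice = base_slice + base_slice / scale * (4 - prio);
		printf("prio %d -> slice %d ms\n", prio, slice);
	}
	/* prio 0 gets 180 ms, prio 7 gets 40 ms, hence the 180/40 = 4.5 ratio */
	return 0;
}
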
> One more important thing to notice is that the throughput of the SSD has
> come down significantly. If I just run one job, I get 73 MB/s. With these
> three jobs running, we achieve close to 19 MB/s in total.

I think it depends on the hardware. On Jeff's SSD, 32 random readers
obtained approximately the same aggregate bandwidth as a single
sequential reader. I think the decision to avoid idling is sane on that
kind of hardware, but not on ones like yours, where a seek carries a
large penalty (I have one in my netbook on which reading 4k takes 1 ms,
i.e. at most ~4 MB/s per seeky stream). However, if you increase the
block size or remove the direct I/O, prefetching should still work for
you.
> I think this is happening because of seeks happening almost after every
> dispatch and that brings down the overall throughput. If we had idled
> here, I think probably overall throughput would have been better.
Agreed. In fact, I'd like to add some measurements to cfq to determine
the idle parameters, instead of relying on those binary rules of thumb.
Which hardware is this, btw?
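
One possible shape for the measurement mentioned above, purely as a
sketch (the structure, field and constant names below are made up for
illustration, not from any existing patch): keep an exponential moving
average of how expensive it turns out to be when we do NOT idle, i.e.
the latency of the first request dispatched after switching away from a
queue, and compare that against slice_idle before deciding to skip
idling.

/*
 * Sketch only - hypothetical names, not cfq-iosched.c code.
 */
struct cfq_switch_stats {
	/* EWMA of first-request latency after a non-idled queue switch */
	unsigned int avg_switch_cost_us;
};

#define SWITCH_COST_EWMA_WEIGHT	8	/* each new sample contributes 1/8 */

static void update_switch_cost(struct cfq_switch_stats *st,
			       unsigned int sample_us)
{
	/* standard exponential moving average */
	st->avg_switch_cost_us +=
		((int)sample_us - (int)st->avg_switch_cost_us) /
		SWITCH_COST_EWMA_WEIGHT;
}

static int worth_idling(const struct cfq_switch_stats *st,
			unsigned int slice_idle_us)
{
	/* idle only if switching away has been costing more than idling would */
	return st->avg_switch_cost_us > slice_idle_us;
}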

> Thanks
> Vivek