lists.openwall.net | Open Source and information security mailing list archives
Date: Fri, 8 Dec 2006 13:05:23 +0100
From: Jens Axboe <jens.axboe@...cle.com>
To: Avantika Mathur <mathur@...ibm.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: cfq performance gap

On Thu, Dec 07 2006, Avantika Mathur wrote:
> Hi Jens,

(you probably noticed now, but the axboe@...e.de email is no longer
valid)

> I've noticed a performance gap between the cfq scheduler and other io
> schedulers when running the rawio benchmark.
> Results from rawio on 2.6.19, cfq and noop schedulers:
>
> CFQ:
>
> procs  device           num read   KB/sec   I/O Ops/sec
> -----  ---------------  ---------  -------  ------------
>    16  /dev/sda             16412     8338          2084
> -----  ---------------  ---------  -------  ------------
>    16                       16412     8338          2084
>
> Total run time 0.492072 seconds
>
>
> NOOP:
>
> procs  device           num read   KB/sec   I/O Ops/sec
> -----  ---------------  ---------  -------  ------------
>    16  /dev/sda             16399    29224          7306
> -----  ---------------  ---------  -------  ------------
>    16                       16399    29224          7306
>
> Total run time 0.140284 seconds
>
> The benchmark workload is 16 processes running 4k random reads.
>
> Is this performance gap a known issue?

CFQ could be a little slower at this benchmark, but your results are
much worse than I would expect. What is the queueing depth of sda? How
are you invoking rawio? Your runtime is very low, how does it look if
you allow the test to run for much longer?

30MiB/sec random read bandwidth seems very high, I'm wondering what
exactly is being tested here.

--
Jens Axboe

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
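[Archive note: for readers who want to reproduce the shape of this workload, the test Avantika describes (16 processes each issuing 4 KiB random reads) can be sketched in Python. This is a simplified stand-in, not rawio itself: rawio reads from a raw device and bypasses the page cache, while this sketch reads an ordinary file through the cache; the file name, process count, and read counts below are arbitrary choices for illustration.]

```python
import multiprocessing
import os
import random
import tempfile
import time

BLOCK = 4096  # 4 KiB reads, matching the benchmark workload


def random_reads(path, file_size, num_reads):
    """Issue num_reads block-aligned 4 KiB random reads; return bytes read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        blocks = file_size // BLOCK
        total = 0
        for _ in range(num_reads):
            offset = random.randrange(blocks) * BLOCK
            total += len(os.pread(fd, BLOCK, offset))
        return total
    finally:
        os.close(fd)


def run_benchmark(path, file_size, procs=16, reads_per_proc=1024):
    """Fan the reads out across `procs` processes; return (KB read, KB/sec)."""
    start = time.time()
    with multiprocessing.Pool(procs) as pool:
        totals = pool.starmap(random_reads,
                              [(path, file_size, reads_per_proc)] * procs)
    elapsed = time.time() - start
    total_kb = sum(totals) // 1024
    return total_kb, total_kb / elapsed


if __name__ == "__main__":
    # Small demo file; a real run would target a large file or raw device.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(1 << 20))  # 1 MiB
        name = f.name
    try:
        kb, kb_per_sec = run_benchmark(name, 1 << 20, procs=4,
                                       reads_per_proc=256)
        print(f"read {kb} KB at {kb_per_sec:.0f} KB/sec")
    finally:
        os.unlink(name)
```

Because everything is cached after the file is written, the numbers this prints say nothing about the I/O scheduler; only a long run against a raw or O_DIRECT device, as Jens suggests, exercises CFQ versus noop.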