Message-ID: <1271420878.24780.145.camel@tucsk.pomaz.szeredi.hu>
Date:	Fri, 16 Apr 2010 14:27:58 +0200
From:	Miklos Szeredi <mszeredi@...e.cz>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	linux-kernel <linux-kernel@...r.kernel.org>,
	Jan Kara <jack@...e.cz>, Suresh Jayaraman <sjayaraman@...e.de>
Subject: CFQ read performance regression

Hi Jens,

I'm chasing a performance bottleneck identified by tiobench that seems
to be caused by CFQ.  On a SLES10-SP3 kernel (2.6.16, with some patches
moving cfq closer to 2.6.17) tiobench with 8 threads gets about 260MB/s
sequential read throughput.  On recent kernels (including vanilla
2.6.34-rc) it gets only about 145MB/s, a regression of about 45%.  The
queue and readahead parameters are the same.
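
For reference, the workload is a plain tiobench run along the lines of
the command below; the size and test directory are only placeholders,
not the exact values used:

  # 8-threaded sequential read/write run; size (MB) and dir are illustrative
  ./tiobench.pl --threads 8 --size 4096 --dir /mnt/test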

This goes back some time; 2.6.27 already seems to perform badly.

Changing the scheduler to noop brings the throughput back into the
260MB/s range, so this is not a driver issue.
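
For reference, switching the scheduler at runtime goes through sysfs,
e.g. (sdX stands in for the actual test device):

  # check the available schedulers and select noop
  cat /sys/block/sdX/queue/scheduler
  echo noop > /sys/block/sdX/queue/scheduler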

Increasing both quantum *and* readahead also improves the read
throughput, though not by as much.  However, both noop and these tweaks
decrease the write throughput somewhat...
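
Both knobs are sysfs tunables; the device name and values below are
just examples of the kind of tweak meant here:

  # raise CFQ's dispatch quantum and the per-device readahead window
  echo 16 > /sys/block/sdX/queue/iosched/quantum
  echo 1024 > /sys/block/sdX/queue/read_ahead_kb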

Apparently on recent kernels the number of dispatched requests stays
mostly at or below 4 and the dispatched sector count at or below 2000,
which is not enough to saturate the bandwidth of this setup.

On 2.6.16 the number of dispatched requests hovers around 22 and the
sector count around 16000.

I uploaded blktraces for the read part of the tiobench runs for both
2.6.16 and 2.6.32:

 http://www.kernel.org/pub/linux/kernel/people/mszeredi/blktrace/
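
The traces were taken with the usual blktrace tooling; something along
these lines is enough to capture and decode them (device and duration
below are placeholders):

  # record the read phase on the test device, then decode offline
  blktrace -d /dev/sdX -w 60 -o tiobench-read
  blkparse -i tiobench-read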

Do you have any idea about the cause of this regression?

Thanks,
Miklos

