Date:	Mon, 4 Dec 2006 14:48:20 +0100
From:	"Martin A. Fink" <fink@....mpg.de>
To:	linux-kernel@...r.kernel.org
Subject: SATA-performance with AHCI

Dear all,

I have now been able to run a performance test with an Intel ICH6R chipset.

Basic hardware data:
- Intel Pentium 4 Xeon at 3.2 GHz
- Intel ICH6R chipset, AHCI enabled
- Intel Hyperthreading On and Off
- 1 GB DDR SDRAM
- onboard SATA controller (4 ports)
- 250 GB SATA hard disks

I used SuSE Linux 9.3 with Linux kernel 2.6.11.4-21.14-smp.

I wrote data from memory to the hard disk in blocks of 1 MB, for an overall 
file size of 2 GB (a minimal sketch of the test program follows the 
results). I got the following results:

SuSE 9.3 - 32-bit Installation - Hyperthreading Off:
 - CPU time 16% (sys, approx. 0% user) - Write speed 40-45 MB/s
SuSE 9.3 - 64-bit Installation - Hyperthreading Off:
 - CPU time 14% (sys, approx. 0% user) - Write speed 50-60 MB/s
SuSE 9.3 - 64-bit Installation - Hyperthreading ON:
 - CPU time 16% (sys, approx. 0% user) - Write speed 40-50 MB/s
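
The test program was essentially of the following form (a minimal sketch; 
the file name, the data pattern and the final fsync() are my choices, the 
block size and file size are as described above):

#define _FILE_OFFSET_BITS 64		/* allow a 2 GB file on 32-bit */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define BLOCK	(1024 * 1024)		/* 1 MB per write() */
#define COUNT	2048			/* 2048 x 1 MB = 2 GB */

int main(void)
{
	static char buf[BLOCK];
	struct timeval t0, t1;
	double sec;
	int fd, i;

	memset(buf, 0xAA, sizeof buf);	/* arbitrary data pattern */
	fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	gettimeofday(&t0, NULL);
	for (i = 0; i < COUNT; i++) {
		if (write(fd, buf, BLOCK) != BLOCK) {
			perror("write");
			return 1;
		}
	}
	fsync(fd);			/* include the flush in the timing */
	gettimeofday(&t1, NULL);
	close(fd);

	sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%.1f MB/s\n", COUNT / sec);
	return 0;
}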

Compared to the ICH6R with AHCI off, the only difference I can see is that 
with AHCI the system reacts much faster to keyboard events, and screen 
redraw seems to run at normal speed. CPU usage, however, has not decreased 
as dramatically as I would have expected.

Thus I did a small calculation. Assume the processor hands the DMA 
controller workloads of (a) 1 B, (b) 1 kB or (c) 64 kB to write 45 MB/s to 
disk. For 10% CPU time on the 3.2 GHz Pentium I get (decimal units, i.e. 
45 MB/s = 45E+06 B/s):
(a) 10% * 3.2 GHz / 45M calls = 7.1 CPU cycles per 1 B call to DMA
(b) 10% * 3.2 GHz / 45k calls = 7.1E+03 CPU cycles per 1 kB call to DMA
(c) 10% * 3.2 GHz / 703 calls = 4.6E+05 CPU cycles per 64 kB call to DMA
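
For anyone who wants to check the arithmetic, here is the same calculation 
as a small C program (decimal units throughout; the 10% CPU figure is the 
rough value from the measurements above):

#include <stdio.h>

int main(void)
{
	const double cycles = 0.10 * 3.2e9;	/* cycles/s at 10% of 3.2 GHz */
	const double rate = 45e6;		/* 45 MB/s written to disk */
	const double sizes[] = { 1.0, 1e3, 64e3 }; /* bytes per DMA request */
	int i;

	for (i = 0; i < 3; i++) {
		double calls = rate / sizes[i];	/* DMA requests per second */
		printf("%8.0f B: %12.0f calls/s, %12.0f cycles/call\n",
		       sizes[i], calls, cycles / calls);
	}
	return 0;
}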

To me (a) looks reasonable (a few cycles of overhead per byte), but it 
would be a foolish way to implement it. Handing over bigger packets as in 
(b) and (c) looks more sensible, but then I cannot understand the huge 
overhead of 1E3 to 1E5 CPU cycles per packet.
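
To see what request sizes actually reach the controller, one could 
presumably watch the avgrq-sz column (average request size in 512-byte 
sectors) of

	iostat -x 1

while the test runs.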

Is this normal, or is something still wrong with my system?

Thank you for your help,

Martin
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
