Date:	Wed, 18 Jul 2007 10:56:11 +0100
From:	Rui Santos <rsantos@...popie.com>
To:	Linux RAID <linux-raid@...r.kernel.org>
CC:	Linux Kernel <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...hat.com>, Neil Brown <neilb@...e.de>
Subject: Slow Soft-RAID 5 performance

Hi,

I'm seeing strangely slow performance on a recently installed server.
Here are the details:

Server: Asus AS-TS500-E4A
Board: Asus DSBV-D (
http://uk.asus.com/products.aspx?l1=9&l2=39&l3=299&l4=0&model=1210&modelmenu=2
)
Hard Drives: 3x Seagate ST3400620AS (
http://www.seagate.com/ww/v/index.jsp?vgnextoid=8eff99f4fa74c010VgnVCM100000dd04090aRCRD&locale=en-US
)
I'm using the AHCI driver, although the behavior is the same with
ata_piix. Here's some info about the AHCI controller:
   
00:1f.2 SATA controller: Intel Corporation 631xESB/632xESB SATA Storage
Controller AHCI (rev 09) (prog-if 01 [AHCI 1.0])
        Subsystem: ASUSTeK Computer Inc. Unknown device 81dc
        Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 19
        I/O ports at 18c0 [size=8]
        I/O ports at 1894 [size=4]
        I/O ports at 1898 [size=8]
        I/O ports at 1890 [size=4]
        I/O ports at 18a0 [size=32]
        Memory at c8000400 (32-bit, non-prefetchable) [size=1K]
        Capabilities: [70] Power Management version 2
        Capabilities: [a8] #12 [0010]


The kernel boot log is attached as boot.msg.

I can get a write throughput of 60 MB/sec on each HD by issuing the
command 'time `dd if=/dev/zero of=test.raw bs=4k count=$(( 1024 * 1024 /
4 )); sync`' (that writes 1 GiB and then flushes it to disk).
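
For reference, an equivalent single-drive run can also be timed by
wrapping the write and the sync in one shell invocation; dd reports its
own throughput either way. The mount point below is only illustrative,
not my actual path:

  cd /mnt/single-drive   # scratch filesystem on one of the Seagates (example path)
  time sh -c 'dd if=/dev/zero of=test.raw bs=4k count=$(( 1024 * 1024 / 4 )); sync'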

Up to this point everything seems acceptable, IMHO. The problem starts
when I test the software RAID across all three HDs.

Configuration: output of 'sfdisk -l'

Disk /dev/sda: 48641 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+     16      17-    136521   fd  Linux raid autodetect
/dev/sda2         17      82      66     530145   fd  Linux raid autodetect
/dev/sda3         83   48640   48558  390042135   fd  Linux raid autodetect
/dev/sda4          0       -       0          0    0  Empty

Disk /dev/sdb: 48641 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1   *      0+     16      17-    136521   fd  Linux raid autodetect
/dev/sdb2         17      82      66     530145   fd  Linux raid autodetect
/dev/sdb3         83   48640   48558  390042135   fd  Linux raid autodetect
/dev/sdb4          0       -       0          0    0  Empty

Disk /dev/sdc: 48641 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdc1   *      0+     16      17-    136521   fd  Linux raid autodetect
/dev/sdc2         17      82      66     530145   fd  Linux raid autodetect
/dev/sdc3         83   48640   48558  390042135   fd  Linux raid autodetect
/dev/sdc4          0       -       0          0    0  Empty

Configuration: output of 'cat /proc/mdstat'

Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [linear]
md0 : active raid1 sda1[0] sdc1[2] sdb1[1]
      136448 blocks [3/3] [UUU]

md1 : active raid5 sda2[0] sdc2[2] sdb2[1]
      1060096 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]

md2 : active raid5 sdc3[2] sda3[0] sdb3[1]
      780083968 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
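
For completeness, chunk size, layout and member state for each array
can also be read with a plain mdadm query (nothing below is specific to
my setup):

  mdadm --detail /dev/md2   # level, 128k chunk, layout, member disks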


The RAID device I'm testing on is /dev/md2. Now, by issuing the same
command 'dd if=/dev/zero of=test.raw bs=4k count=$(( 1024 * 1024 / 4 ));
sync' in the RAID device's mount point, I get the following speeds:
With stripe_cache_size at the default of '256': 51 MB/sec
With stripe_cache_size at '8192': 73 MB/sec
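
For anyone reproducing the tuning: stripe_cache_size is set through
sysfs, and its unit is cache entries (each entry holds one page per
member disk), so larger values trade RAM for throughput. The values
below are simply the ones I tested:

  cat /sys/block/md2/md/stripe_cache_size         # 256 by default
  echo 8192 > /sys/block/md2/md/stripe_cache_size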


Extra notes:
- All HDs have queue_depth at '31', which means NCQ is on. If I disable
NCQ by setting the value to '1', the write speed achieved is lower
(commands shown after this list).
- Although I started from a fresh openSUSE 10.2 installation, I'm now
running a vanilla 2.6.22.1 kernel.
- The kernel is built for generic x86-64.
- The Soft-RAID write-intent bitmap is disabled. If I enable it, the
performance takes a serious hit (see the mdadm commands after this list).
- The processor is an Intel Xeon 5060 dual-core (family 15) with
Hyper-Threading enabled. With it disabled, the performance for this
specific workload is the same.
- Filesystem is ext3
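
The NCQ and bitmap settings mentioned above were toggled roughly as
follows; sda stands for each member disk in turn, and the bitmap lines
assume an mdadm version with --grow bitmap support:

  cat /sys/block/sda/device/queue_depth        # 31 = NCQ on
  echo 1 > /sys/block/sda/device/queue_depth   # effectively disables NCQ
  mdadm --grow --bitmap=internal /dev/md2      # add a write-intent bitmap
  mdadm --grow --bitmap=none /dev/md2          # remove it again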


Final question: shouldn't I, at least, be able to get write speeds
close to 120 MB/sec (two data disks at 60 MB/sec each) instead of the
current 73 MB/sec? Is this a Soft-RAID problem, or could it be something
else? Or am I just missing something?

Thanks for your time,
Rui Santos


