Message-ID: <Pine.LNX.4.64.0706181111020.12717@asgard.lang.hm>
Date:	Mon, 18 Jun 2007 11:12:45 -0700 (PDT)
From:	david@...g.hm
To:	Lennart Sorensen <lsorense@...lub.uwaterloo.ca>
cc:	Brendan Conoboy <blc@...hat.com>, Neil Brown <neilb@...e.de>,
	Wakko Warner <wakko@...mx.eu.org>,
	linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org
Subject: Re: limits on raid

On Mon, 18 Jun 2007, Lennart Sorensen wrote:

> On Mon, Jun 18, 2007 at 10:28:38AM -0700, david@...g.hm wrote:
>> I plan to test the different configurations.
>>
>> however, if I was saturating the bus with the reconstruct, how can I fire
>> off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only slowing the
>> reconstruct to ~4M/sec?
>>
>> I'm putting 10x as much data through the bus at that point, which would
>> seem to prove that it's not the bus that's saturated.
>
> dd 45MB/s from the raid sounds reasonable.
>
> If you have 45 drives, doing a resync of raid5 or raid6 should probably
> involve reading all the disks, and writing new parity data to one drive.
> So if you are writing 5MB/s, then you are reading 44*5MB/s from the
> other drives, which is 220MB/s.  If your resync drops to 4MB/s when
> doing dd, then you have 44*4MB/s which is 176MB/s or 44MB/s less read
> capacity, which surprisingly seems to match the dd speed you are
> getting.  Seems like you are indeed very much saturating a bus
> somewhere.  The numbers certainly agree with that theory.
>
> What kind of setup are the drives connected to?

simple ultra-wide SCSI to a single controller.

I didn't realize that the rate reported by /proc/mdstat was just the write 
speed; I thought it was the total data rate (reads + writes). The next time 
this message gets changed, it would be worth clarifying this.
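
For anyone who wants to rerun the arithmetic from Lennart's message, here 
is a rough shell sketch (untested; it assumes the speed= figure in 
/proc/mdstat is the parity write rate in KB/s, that 44 data drives are 
being read at that same rate during the resync, and that there is only 
one md array):

  #!/bin/sh
  # crude estimate of the read traffic implied by the md resync rate
  resync_kb=$(grep -o 'speed=[0-9]*' /proc/mdstat | head -n1 | cut -d= -f2)
  [ -n "$resync_kb" ] || { echo "no resync in progress"; exit 1; }
  data_drives=44    # drives being read during the resync; adjust as needed
  echo "resync write rate:  $(( resync_kb / 1024 )) MB/s"
  echo "implied read rate:  $(( resync_kb * data_drives / 1024 )) MB/s"

With the numbers quoted above that works out to 44*5 = 220 MB/s of reads 
while idle and 44*4 = 176 MB/s during the dd, a drop of about 44 MB/s, 
which is roughly what the dd itself was getting.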

David Lang
