Date:	Tue, 19 Jun 2007 13:51:36 -0700 (PDT)
From:	david@...g.hm
To:	Lennart Sorensen <lsorense@...lub.uwaterloo.ca>
cc:	Wakko Warner <wakko@...mx.eu.org>,
	Brendan Conoboy <blc@...hat.com>, Neil Brown <neilb@...e.de>,
	linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org
Subject: Re: limits on raid

On Tue, 19 Jun 2007, Lennart Sorensen wrote:

> On Mon, Jun 18, 2007 at 02:56:10PM -0700, david@...g.hm wrote:
>> yes, I'm using Promise drive shelves; I have them configured to export
>> the 15 drives as 15 LUNs on a single ID.
>>
>> I'm going to be using this as a huge circular buffer that will just be
>> overwritten eventually 99% of the time, but once in a while I will need to
>> go back into the buffer and extract and process the data.
>
> I would guess that if you ran 15 drives per channel on 3 different
> channels, you would resync in 1/3 the time.  Well unless you end up
> saturating the PCI bus instead.
>
> Hardware RAID of course has an advantage there in that it doesn't have
> to go across the bus to do the work (although if you put 45 drives on
> one SCSI channel on hardware RAID, it will still be limited).

I fully realize that the channel will be the bottleneck; I just didn't 
understand what /proc/mdstat was telling me. I thought it was reporting 
that the resync was processing 5M/sec overall, not that it was writing 
5M/sec to each of the two parity locations.
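For what it's worth, here is a rough sketch of that interpretation in
Python. It is illustrative only -- it assumes the speed= figure in
/proc/mdstat tracks per-device progress rather than aggregate bus
throughput, and the member count below is just a placeholder for my
45-drive setup:

#!/usr/bin/env python3
# Rough estimate of total bus traffic during an md resync, assuming the
# speed= figure in /proc/mdstat is per-device progress rather than the
# aggregate throughput crossing the bus (the interpretation above).
import re

MEMBERS = 45   # placeholder: number of drives touched per resync step

def resync_speed(mdstat_path="/proc/mdstat"):
    """Return the reported resync speed in KiB/s, or None if idle."""
    with open(mdstat_path) as f:
        text = f.read()
    # resync lines look roughly like:
    #   [=>....]  recovery =  5.4% (...) finish=123.4min speed=5120K/sec
    m = re.search(r"speed=(\d+)K/sec", text)
    return int(m.group(1)) if m else None

if __name__ == "__main__":
    speed = resync_speed()
    if speed is None:
        print("no resync in progress")
    else:
        print("reported: %d KiB/s" % speed)
        print("estimated bus traffic: ~%d KiB/s across %d drives"
              % (speed * MEMBERS, MEMBERS))

(And in case it helps anyone else, the resync rate itself can be raised
or throttled through /proc/sys/dev/raid/speed_limit_min and
speed_limit_max.)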

David Lang
