Message-ID: <Pine.LNX.4.64.0706160947350.7557@asgard.lang.hm>
Date:	Sat, 16 Jun 2007 10:16:17 -0700 (PDT)
From:	david@...g.hm
To:	David Greaves <david@...eaves.com>
cc:	Neil Brown <neilb@...e.de>, Wakko Warner <wakko@...mx.eu.org>,
	linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org
Subject: Re: limits on raid

On Sat, 16 Jun 2007, David Greaves wrote:

> david@...g.hm wrote:
>>  On Sat, 16 Jun 2007, Neil Brown wrote:
>>
>>  I want to test several configurations, from a 45-disk raid6 to a 45-disk
>>  raid0. At 2-3 days per test (or longer, depending on the tests) this
>>  becomes a very slow process.
> Are you suggesting the code that is written to enhance data integrity is 
> optimised (or even touched) to support this kind of test scenario?
> Seriously? :)

Actually, if it can be done without a huge impact on the maintainability 
of the code, I think it would be a good idea, for the simple reason that 
the increased experimentation would let people find out what raid level 
is really appropriate for their needs.

There is a _lot_ of confusion about what the performance implications of 
the different raid levels are (especially when you consider things like 
raid 10/50/60, where you have two layers combined), and anything that 
encourages experimentation would be a good thing.
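
For example, cycling a test box through different levels with mdadm might 
look something like this (a rough sketch only; the device names and the 
45-disk globs are placeholders, not my actual setup):

  # build a 45-disk raid6 for the first round of tests
  mdadm --create /dev/md0 --level=6 --raid-devices=45 \
        /dev/sd[b-z] /dev/sda[a-t]
  # ... run the benchmarks ...
  # tear it down and recreate the same disks as raid0
  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=0 --raid-devices=45 \
        /dev/sd[b-z] /dev/sda[a-t]

The raid6 create kicks off an initial sync, which is exactly the multi-day 
wait I'm complaining about; raid0 has no such step.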

>>  also, when a rebuild is slow enough (and has enough of a performance
>>  impact) it's not uncommon to want to operate in degraded mode just long
>>  enough to get to a maintenance window, and then recreate the array and
>>  reload from backup.
>
> so would mdadm --remove the rebuilding disk help?

No, let me try again:

A drive fails Monday morning.

Scenario 1:

Replace the failed drive and start the rebuild. The system will be slow 
(degraded mode + rebuild) for the next three days.
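
(For reference, the hot-replace sequence I mean, assuming the array is 
/dev/md0 and the replacement goes in as /dev/sdc; both names are 
placeholders:)

  # drop the dead disk and add the replacement
  mdadm /dev/md0 --remove /dev/sdc
  mdadm /dev/md0 --add /dev/sdc
  # the rebuild starts immediately; watch its progress
  cat /proc/mdstat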

Scenario 2:

Leave it in degraded mode until Monday night (accepting the speed penalty 
of degraded mode, but not the rebuild penalty).

Monday night, shut down the system, put in the new drive, reinitialize the 
array, and reload the system from backup.

The system is back to full speed Tuesday morning.

Scenario 2 isn't supported with md today, although it sounds as if the 
"skip rebuild" option could handle it for everything except raid 5.
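
If it were supported, I'd expect the Monday-night steps to look roughly 
like this (a sketch only; /dev/md0, the member globs, the filesystem, and 
the mount point are placeholders, and I'm guessing that the "skip rebuild" 
is mdadm's --assume-clean):

  # stop the degraded array and recreate it across all 45 disks,
  # skipping the initial sync
  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=6 --raid-devices=45 --assume-clean \
        /dev/sd[b-z] /dev/sda[a-t]
  # fresh filesystem, then reload from backup
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt/array
  tar -xpf /backup/array.tar -C /mnt/array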

On my test system the rebuild says it's running at 5 MB/s, while a dd to a 
file on the array reports 45 MB/s (even while the rebuild is running), so it 
seems to me that there may be value in this approach.
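
(Those two numbers came from roughly the following; the mount point is a 
placeholder from my setup:)

  # rebuild rate as md reports it, e.g. "speed=5120K/sec"
  cat /proc/mdstat
  # sequential write throughput to a file on the same array
  dd if=/dev/zero of=/mnt/array/testfile bs=1M count=1024 conv=fdatasync

Note the rebuild rate is also throttled via 
/proc/sys/dev/raid/speed_limit_min and speed_limit_max, so part of the gap 
may just be tuning.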

David Lang

