Date:	Fri, 22 Jun 2007 12:00:51 -0400
From:	Bill Davidsen <davidsen@....com>
To:	David Greaves <david@...eaves.com>
CC:	david@...g.hm, Neil Brown <neilb@...e.de>,
	linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org
Subject: Re: limits on raid

David Greaves wrote:
> david@...g.hm wrote:
>> On Fri, 22 Jun 2007, David Greaves wrote:
>>
>>> That's not a bad thing - until you look at the complexity it brings 
>>> - and then consider the impact and exceptions when you do, eg 
>>> hardware acceleration? md information fed up to the fs layer for 
>>> xfs? simple long term maintenance?
>>>
>>> Often these problems are well worth the benefits of the feature.
>>>
>>> I _wonder_ if this is one where the right thing is to "just say no" :)
>> so for several reasons I don't see this as something that's deserving 
>> of an automatic 'no'
>>
>> David Lang
>
> Err, re-read it, I hope you'll see that I agree with you - I actually 
> just meant the --assume-clean workaround stuff :)
>
> If you end up 'fiddling' in md because someone specified 
> --assume-clean on a raid5 [in this case just to save a few minutes 
> *testing time* on system with a heavily choked bus!] then that adds 
> *even more* complexity and exception cases into all the stuff you 
> described.

A "few minutes?" Are you reading the times people are seeing with 
multi-TB arrays? Let's see, 5TB at a rebuild rate of 20MB... three days. 
And as soon as you believe that the array is actually "usable" you cut 
that rebuild rate, perhaps in half, and get dog-slow performance from 
the array. It's usable in the sense that reads and writes work, but for 
useful work it's pretty painful. You either fail to understand the 
magnitude of the problem or wish to trivialize it for some reason.
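
To make the arithmetic explicit (a rough sketch; the 20MB/s figure is 
the example rate above, and the "in use" rate is just an assumed 
halving, not a measurement):

    # Rough rebuild-time arithmetic for the example above.
    TB = 1000 ** 4
    MB = 1000 ** 2
    array_size = 5 * TB
    idle_rate = 20 * MB            # rebuild rate with the array otherwise idle
    busy_rate = idle_rate // 2     # assumed rate once the array is in real use

    for label, rate in (("idle", idle_rate), ("in use", busy_rate)):
        days = array_size / rate / 86400.0
        print("%-7s %.1f days" % (label, days))

That prints about 2.9 days idle and 5.8 days once the rate is halved.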

By delaying parity computation until the first write to a stripe, only 
the growth of the filesystem is slowed, and all data are protected 
without waiting for the lengthy check. The rebuild speed can be set very 
low, because on-demand rebuild will do most of the work.
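
To illustrate the idea (a toy sketch only, not md's actual code; the 
class and method names are made up for illustration):

    # Toy model of deferred parity: keep a per-stripe flag and compute
    # parity the first time a stripe is written, instead of up front
    # during a full initial sync.  A low-priority background pass picks
    # up the untouched stripes.
    class DeferredParityArray:
        def __init__(self, nstripes):
            self.parity_valid = [False] * nstripes

        def write_stripe(self, n, data_blocks):
            if not self.parity_valid[n]:
                # First touch: compute parity now, on demand.
                self.compute_parity(n, data_blocks)
                self.parity_valid[n] = True
            # ... normal read-modify-write path for later writes ...

        def background_scrub(self, budget):
            # The background rate can be set very low, since writes
            # already cover the stripes that are actually in use.
            done = 0
            for n, valid in enumerate(self.parity_valid):
                if done >= budget:
                    break
                if not valid:
                    self.compute_parity(n, None)
                    self.parity_valid[n] = True
                    done += 1

        def compute_parity(self, n, data_blocks):
            pass  # placeholder: XOR of the data blocks in stripe n
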
>
> I'm very much for the fs layer reading the lower block structure so I 
> don't have to fiddle with arcane tuning parameters - yes, *please* 
> help make xfs self-tuning!
>
> Keeping life as straightforward as possible low down makes the upwards 
> interface more manageable and that goal more realistic... 

Those two paragraphs are mutually exclusive. The fs can be simple 
because it rests on a simple device, even if the "simple device" is 
provided by LVM or md. And LVM and md can stay simple because they rest 
on simple devices, even if they are provided by PATA, SATA, nbd, etc. 
Independent layers make each layer more robust. If you want to 
compromise the layer separation, some approach like ZFS with full 
integration would seem to be promising. Note that layers allow 
specialized features at each point, trading integration for flexibility.

My feeling is that full integration and independent layers each have 
benefits; as you connect the layers to expose operational details, you 
need to handle changes in those details, which would seem to make the 
layers more complex. What I'm looking for here is better performance in 
one particular layer, the md RAID5 layer. I like to avoid unnecessary 
complexity, but I feel that the current performance suggests room for 
improvement.

-- 
bill davidsen <davidsen@....com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979

