Message-ID: <45BE1018.4010402@tmr.com>
Date:	Mon, 29 Jan 2007 10:17:44 -0500
From:	Bill Davidsen <davidsen@....com>
To:	Michael Tokarev <mjt@....msk.ru>
CC:	Marc Perkel <mperkel@...oo.com>, linux-kernel@...r.kernel.org
Subject: Re: Raid 10 question/problem [ot]

Michael Tokarev wrote:
> Bill Davidsen wrote:
> []
>> RAID-10 is not the same as RAID 0+1.
> 
> It is.  Yes, there's separate module for raid10, but what it - basically -
> does is the same as raid0 module over two raid1 modules will do.  It's
> just a bit more efficient (less levels, more room for optimisations),
> easy to use (you'll have single array instead of at least 3), and a bit
> more flexible;  at the same way it's less widely tested...
> 
> But the end result is basically the same for both ways.
> 
For values of "same" which exclude consideration of the disk layout, 
throughput, overhead, system administration, and use of spares. Those 
are different. But both methods do write multiple copies of ones and 
zeros to storage media.
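The two constructions being compared can be sketched with mdadm. A minimal sketch; the device names and four-disk count are illustrative, not taken from the thread:

```shell
# Native md raid10 over four disks: one array, one module.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# The "same" end result built as raid0 over two raid1 pairs:
# three arrays to create, monitor, and assign spares to.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```

Both give two copies of the data striped across the disks, which is the sense in which they are "the same"; the administrative differences above are the sense in which they are not.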

Neil Brown, 08/23/2005:
  - A raid10 can consist of an odd number of drives (if you have a
    cabinet with, say, 8 slots, you can have 1 hot spare, and 7 drives
    in a raid10.  You cannot do that with LVM (or raid0) over raid1).
  - raid10 has a layout ('far') which theoretically can provide
    sequential read throughput that scales by number of drives, rather
    than number of raid1 pairs.  I say 'theoretically' because I think
    there are still issues with the read-balancing code that make this
    hard to get in practice (though increasing the read-ahead seems to
    help).


After testing about 40 configurations, I can say that write performance 
is better as well, for any given stripe cache size up to 4x the stripe 
size. I was looking at something else, but the numbers happen to be 
available.

-- 
Bill Davidsen <davidsen@....com>
   "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot
