Message-ID: <CAAMCDeeHxMBoVkNYAyssjgjo4=FYd2NonS-mqC7OUEL89B9Cig@mail.gmail.com>
Date:   Tue, 14 Feb 2023 16:28:36 -0600
From:   Roger Heflin <rogerheflin@...il.com>
To:     Heinz Mauelshagen <heinzm@...hat.com>
Cc:     Kyle Sanderson <kyle.leet@...il.com>, linux-raid@...r.kernel.org,
        Song Liu <song@...nel.org>,
        device-mapper development <dm-devel@...hat.com>,
        John Stoffel <john@...ffel.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [dm-devel] RAID4 with no striping mode request

On Tue, Feb 14, 2023 at 3:27 PM Heinz Mauelshagen <heinzm@...hat.com> wrote:
> ...which is RAID1 plus a parity disk, which seems superfluous, as
> you already achieve (N-1) resilience against device failures without
> the latter.
>
> What would you need such a parity disk for?
>
> Heinz
>

I thought that at first too, but threw that idea out as it did not
make much sense.

What he appears to want is 8 linear non-striped data disks + a parity disk.

Such that you can lose any one data disk and parity can rebuild that
disk.  And if you lose several data disks, you still have intact,
non-striped data on the remaining disks.
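
To make the rebuild property concrete: with plain XOR parity (an
assumption, since the thread doesn't pin down a parity scheme), the
lost disk is simply the XOR of the parity disk with all surviving data
disks.  A toy Python demonstration, not md/dm code:

  import os

  def xor_blocks(blocks):
      out = bytearray(len(blocks[0]))
      for b in blocks:
          for i, byte in enumerate(b):
              out[i] ^= byte
      return bytes(out)

  disks = [os.urandom(16) for _ in range(8)]  # 8 non-striped data disks
  parity = xor_blocks(disks)                  # dedicated parity disk

  lost = 3                                    # pretend disk 3 died
  survivors = [d for i, d in enumerate(disks) if i != lost]
  assert xor_blocks(survivors + [parity]) == disks[lost]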

It would almost seem that you would need to put a separate filesystem
on each data disk/section (or have a filesystem that is redundant
enough to survive); otherwise losing an entire data disk would leave
the filesystem in a mess.

So: N filesystems plus a parity disk covering the data on all N
separate filesystems.  And each write requires reading the old data
from the disk you are writing to and the old parity, recalculating
the new parity, and then writing out both the new data and the new
parity (sketched below).
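
A minimal sketch of that read-modify-write cycle, again assuming XOR
parity (illustrative only, not the kernel code path):

  def updated_parity(old_data: bytes, new_data: bytes,
                     old_parity: bytes) -> bytes:
      # new parity = old parity XOR old data XOR new data, so only
      # the target data disk and the parity disk are touched:
      # 2 reads + 2 writes per logical write.
      return bytes(p ^ o ^ n
                   for p, o, n in zip(old_parity, old_data, new_data))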

If the parity disk were an SSD it would be fast enough, but I would
expect that SSD to get used up/burned out, since the parity is
re-written for every write on every data disk, unless you bought an
expensive high-write SSD.
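
The wear concern is easy to quantify: the parity disk absorbs one
parity write for every write to any of the N data disks, so it sees
the combined write volume of the whole array.  With invented numbers
(none of these figures come from the thread):

  n_data_disks = 8
  tb_per_disk_per_year = 50     # assumed write workload per data disk
  parity_tb_per_year = n_data_disks * tb_per_disk_per_year  # 400 TB/yr
  consumer_ssd_tbw = 600        # assumed endurance rating (TBW)
  print(consumer_ssd_tbw / parity_tb_per_year, "years to wear-out")  # 1.5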

The only advantage of the setup is that if you lose too many disks you
still have some data.

It is not clear to me that this would be any cheaper, if the parity
needs to be a normal SSD (since SSDs are about 4x the price/GB, and
high-write ones are even more), than a classic bunch of mirrors, or
even, say, a 4-disk RAID6 where you can lose any 2 disks and still
have your data.
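
A back-of-envelope cost comparison, with every price assumed purely
for illustration (only the ~4x SSD premium comes from above):

  hdd = 20.0          # assumed $/TB for HDD
  ssd = 4 * hdd       # the ~4x SSD price premium
  size = 8            # assumed TB per disk

  linear_parity = (8 * size * hdd + size * ssd) / (8 * size)  # 8 HDD + SSD parity
  mirror_pair   = (2 * size * hdd) / size                     # RAID1 pair
  raid6_4disk   = (4 * size * hdd) / (2 * size)               # 2 data + 2 parity

  print(linear_parity, mirror_pair, raid6_4disk)  # 30.0 40.0 40.0 $/usable TB

With these numbers the linear+parity layout is cheaper per usable TB,
but a high-write SSD for parity erodes that edge, and mirrors or a
4-disk RAID6 survive more failure combinations outright.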
