Message-ID: <CACsaVZJ-5y7U5xqwL9bof69EKbTk+wrHWFcBFYyP_BwVSt+CNA@mail.gmail.com>
Date:   Mon, 13 Feb 2023 13:11:46 -0800
From:   Kyle Sanderson <kyle.leet@...il.com>
To:     John Stoffel <john@...ffel.org>
Cc:     device-mapper development <dm-devel@...hat.com>,
        linux-raid@...r.kernel.org, Song Liu <song@...nel.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: RAID4 with no striping mode request

> On Mon, Feb 13, 2023 at 11:40 AM John Stoffel <john@...ffel.org> wrote:
>
> >>>>> "Kyle" == Kyle Sanderson <kyle.leet@...il.com> writes:
>
> > hi DM and Linux-RAID,
> > There have been multiple proprietary solutions (some nearly 20 years
> > old now) with a number of (userspace) bugs that are becoming untenable
> > for me as an end user. Basically how they work is a closed MD module
> > (typically administered through DM) that uses RAID4 for a dedicated
> > parity disk across multiple other disks.
>
> You need to explain what you want in *much* better detail.  Give simple
> concrete examples.  From the sound of it, you want RAID6 but with
> RAID4 dedicated Parity so you can spin down some of the data disks in
> the array?  But if need be, spin up idle disks to recover data if you
> lose an active disk?

No, just a single dedicated parity disk - there's no striping on any
of the data disks. The result of this is you can lose 8 data disks
and the parity disk from an array of 10 and still access the last
remaining disk, because each disk holds a complete copy of its own
data. The implementations do this by still exposing each individual
disk (/dev/md*), formatted (and encrypted) independently; when a
disk is written to, the parity information on the dedicated disk is
updated. That way, when you add a new, fully zeroed disk to the
array (parity disk is 16T, new disk is 4T), parity is preserved.
Bytes written beyond the 4T boundary simply don't include that disk
in the parity calculation.
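
To make that concrete, here's a tiny standalone C sketch of the XOR
math I have in mind - not MD/DM code, and the disk count and byte
values are made up purely for illustration. It shows why an all-zero
disk leaves the existing parity untouched (p ^ 0 == p) and why a
single-block write only needs the written disk and the parity disk
spinning:

/* Hedged illustration only - not MD/DM code. One byte stands in for
 * one block at the same offset on each data disk. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Parity for one offset is the XOR of that offset across the data
 * disks that cover it. */
static uint8_t parity_of(const uint8_t *bytes, int ndisks)
{
    uint8_t p = 0;
    for (int i = 0; i < ndisks; i++)
        p ^= bytes[i];
    return p;
}

int main(void)
{
    uint8_t at_offset[4] = { 0x11, 0x22, 0x33, 0x00 };
    uint8_t p = parity_of(at_offset, 3);   /* parity of 3 data disks */

    /* Adding a 4th, fully zeroed disk changes nothing: p ^ 0 == p. */
    assert(parity_of(at_offset, 4) == p);

    /* Read-modify-write: overwriting one disk's block only needs the
     * old data and the old parity, i.e. two spinning disks. */
    uint8_t old_data = at_offset[1], new_data = 0x7E;
    at_offset[1] = new_data;
    p = (uint8_t)(p ^ old_data ^ new_data);
    assert(p == parity_of(at_offset, 4));   /* matches a full recompute */

    printf("parity after update: 0x%02x\n", p);
    return 0;
}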

> Really hard to understand what exactly you're looking for here.

This might help: https://www.snapraid.it/compare . There are at least
hundreds of thousands of these systems out there (based on public
sales figures from a single vendor), if not well into the millions.

Kyle.

On Mon, Feb 13, 2023 at 11:40 AM John Stoffel <john@...ffel.org> wrote:
>
> >>>>> "Kyle" == Kyle Sanderson <kyle.leet@...il.com> writes:
>
> > hi DM and Linux-RAID,
> > There have been multiple proprietary solutions (some nearly 20 years
> > old now) with a number of (userspace) bugs that are becoming untenable
> > for me as an end user. Basically how they work is a closed MD module
> > (typically administered through DM) that uses RAID4 for a dedicated
> > parity disk across multiple other disks.
>
> You need to explain what you want in *much* better detail.  Give simple
> concrete examples.  From the sound of it, you want RAID6 but with
> RAID4 dedicated Parity so you can spin down some of the data disks in
> the array?  But if need be, spin up idle disks to recover data if you
> lose an active disk?
>
> Really hard to understand what exactly you're looking for here.
>
>
> > As there is no striping, the maximum size of the protected data is the
> > size of the parity disk (so a set of 4+8+12+16 disks can be protected
> > by a single dedicated 16T disk). When a block is written on any disk,
> > the corresponding parity is read from the parity disk and updated
> > based on the existing + new values (so only the written disk and the
> > parity disk are spun up). Additionally, if enough disks are already
> > spun up, the parity information can be recalculated from all of the
> > spinning disks, resulting in a single write to the parity disk
> > (without a read on the parity, doubling throughput). Finally, any of
> > the data disks can be moved around within the array without impacting
> > parity, as the layout has not changed. I don't necessarily need all
> > of these features; the important one is the ability to remove a disk
> > and still access the data that was on it, spinning up every other
> > disk until the rebuild is complete.
>
> > The benefit of this is that the data disks can all be zoned, and you
> > can have a fast parity disk and still maintain excellent performance
> > in the array (limited only by the speed of the disk in question plus
> > the parity disk). Additionally, should 2 disks fail, you've either
> > lost the parity disk and one data disk, or 2 data disks with the
> > parity and other disks intact - either way, the surviving data disks
> > remain fully readable.
>
> > I was reading through the DM and MD code and it looks like everything
> > may already be there to do this; it just needs (significant) stubs
> > added to support this mode (or new code). Snapraid is a friendly (and
> > respectable) implementation of this. Unraid and Synology SHR compete
> > in this space, as well as other NAS and enterprise SAN providers.
>
> > Kyle.
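
To make the quoted write paths above concrete as well, here's a
second small C sketch - again not MD/DM code; the 4/8/12/16-entry
arrays just stand in for 4T/8T/12T/16T disks, and all values are
assumed for illustration. It shows the "recalculate from all spinning
disks" path, how shorter disks simply drop out of the parity beyond
their size, and why reordering data disks doesn't disturb parity:

/* Hedged illustration only - not MD/DM code. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NDISKS 4

/* A disk shorter than the offset contributes nothing, which is why a
 * 16T parity disk can cover 4T+8T+12T+16T data disks. */
static uint8_t recompute_parity(const uint8_t *disk[], const size_t size[],
                                size_t offset)
{
    uint8_t p = 0;
    for (int i = 0; i < NDISKS; i++)
        if (offset < size[i])
            p ^= disk[i][offset];
    return p;
}

int main(void)
{
    /* Tiny stand-ins for 4T/8T/12T/16T data disks. */
    static uint8_t d0[4]  = { 1, 2, 3, 4 };
    static uint8_t d1[8]  = { 5, 6, 7, 8, 9, 10, 11, 12 };
    static uint8_t d2[12] = { 0 };
    static uint8_t d3[16] = { 0 };
    const uint8_t *disk[NDISKS] = { d0, d1, d2, d3 };
    const size_t   size[NDISKS] = { 4, 8, 12, 16 };

    /* With every data disk spinning, parity is one pass of XORs and a
     * single write to the parity disk - no parity read needed. */
    uint8_t parity[16];
    for (size_t o = 0; o < 16; o++)
        parity[o] = recompute_parity(disk, size, o);

    /* Beyond 4 blocks the 4T disk no longer participates. */
    assert(parity[6] == (uint8_t)(d1[6] ^ d2[6] ^ d3[6]));

    /* XOR is order-independent, so moving data disks around within the
     * array leaves the parity layout unchanged. */
    const uint8_t *shuffled[NDISKS] = { d3, d1, d0, d2 };
    const size_t   shufsz[NDISKS]   = { 16, 8, 4, 12 };
    for (size_t o = 0; o < 16; o++)
        assert(parity[o] == recompute_parity(shuffled, shufsz, o));

    printf("parity[0]=0x%02x parity[6]=0x%02x\n", parity[0], parity[6]);
    return 0;
}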
