Message-ID: <CACsaVZJvXpCt37nQOoe8qd1EPUpfdMM1HwHk9tVO8HdU_Azhhw@mail.gmail.com>
Date:   Sun, 12 Feb 2023 19:07:05 -0800
From:   Kyle Sanderson <kyle.leet@...il.com>
To:     device-mapper development <dm-devel@...hat.com>,
        linux-raid@...r.kernel.org
Cc:     Song Liu <song@...nel.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: RAID4 with no striping mode request

hi DM and Linux-RAID,

There have been multiple proprietary solutions (some nearly 20 years
old now) with a number of (userspace) bugs that are becoming untenable
for me as an end user. Basically, they work as a closed MD module
(typically administered through DM) that uses RAID4 to maintain a
dedicated parity disk across multiple other disks.

As there is no striping, the maximum size of any protected data disk is
the size of the parity disk (so, for example, disks of size 4, 8, 12,
and 16 can be protected by a single dedicated parity disk of size 16).
When a block is written on any data disk, the corresponding parity
block is read back from the parity disk and updated based on the
existing and new values (so only the written disk and the parity disk
need to be spun up). Additionally, if enough disks are already spun up,
the parity information can be recalculated from all of the spinning
disks, resulting in a single write to the parity disk (without a read
on the parity disk, doubling throughput). Finally, any of the data
disks can be moved around within the array without impacting parity, as
the layout has not changed. I don't necessarily need all of these
features; the important one is the ability to remove a disk and still
access the data that was on it by spinning up every other disk until
the rebuild is complete.
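
To make the parity math concrete, here is a rough userspace sketch (plain
C, not actual MD/DM code; the function names and BLOCK_SIZE are placeholders
I made up) of the two write paths described above:

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

/* Read-modify-write path: only the written data disk and the parity disk
 * need to spin up.  new_parity = old_parity ^ old_data ^ new_data.  */
void parity_update_rmw(uint8_t *parity, const uint8_t *old_data,
                       const uint8_t *new_data)
{
        for (size_t i = 0; i < BLOCK_SIZE; i++)
                parity[i] ^= old_data[i] ^ new_data[i];
}

/* Full-recompute path: when all data disks are already spinning, rebuild
 * the parity block from scratch and write it once (no parity read).  */
void parity_recompute(uint8_t *parity, const uint8_t *const *data,
                      int ndisks)
{
        for (size_t i = 0; i < BLOCK_SIZE; i++) {
                uint8_t p = 0;

                for (int d = 0; d < ndisks; d++)
                        p ^= data[d][i];
                parity[i] = p;
        }
}

Both paths produce the same parity block, which is also why a data disk can
be relocated within the array without touching parity: XOR is commutative,
so it doesn't matter which slot holds which member.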

The benefit of this is that the data disks can all be zoned, while a
fast parity disk still lets the array maintain excellent performance
(limited only by the speed of the disk in question plus the parity
disk). Additionally, should two disks fail, you have lost either the
parity disk and one data disk, or two data disks; the parity and the
remaining data disks are unaffected.

I was reading through the DM and MD code and it looks like everything
may already be there to do this; it just needs (significant) stubs
added to support this mode (or new code). SnapRAID is a friendly (and
respectable) implementation of this. Unraid and Synology SHR compete
in this space, as do other NAS and enterprise SAN providers.

Kyle.
