Message-ID: <20090901005629.3932.qmail@science.horizon.com>
Date:	31 Aug 2009 20:56:29 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	david@...g.hm, pavel@....cz
Cc:	linux-doc@...r.kernel.org, linux-ext4@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux@...izon.com
Subject: Re: raid is dangerous but that's secret (was Re: [patch] ext2/3:

>From david@...g.hm Mon Aug 31 15:46:19 2009
Date: Mon, 31 Aug 2009 08:45:38 -0700 (PDT)
From: david@...g.hm
X-X-Sender: dlang@...ard.lang.hm
To: Pavel Machek <pavel@....cz>
cc: George Spelvin <linux@...izon.com>, linux-doc@...r.kernel.org,
        linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: raid is dangerous but that's secret (was Re: [patch] ext2/3:
In-Reply-To: <20090831105645.GD1353@....cz>
References: <20090831005426.13607.qmail@...ence.horizon.com> <20090831105645.GD1353@....cz>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

>>> That's one thing I really like about ZFS: its policy of "don't trust
>>> the disks."  If nothing else, simply telling you "your disks f*ed up,
>>> and I caught them doing it", instead of the usual mysterious corruption
>>> detected three months later, is tremendously useful information.
>>
>> The more I learn about storage, the more I like idea of zfs. Given the
>> subtle issues between filesystem and raid layer, integrating them just
>> makes sense.
> 
> Note that all that zfs does is tell you that you already lost data (and 
> then only if the checksum would actually fail on a blank block being 
> returned); it doesn't protect your data.

Obviously, there are limits, but it does provide useful protection:
- You know where the missing data is.
- The error isn't amplified by believing corrupted metadata.
- I seem to recall that ZFS does replicate metadata.
- Corrupted replicas can be "scrubbed" and rewritten from uncorrupted ones.
- If you have some storage redundancy, it can try different mirrors
  to get the data back (roughly the read loop sketched below).
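
For the mirror/ditto-copy case, the read path is conceptually something
like the sketch below.  The struct and helper names (blkptr, read_copy,
checksum, rewrite_bad_copies) are invented for illustration, not the
actual ZFS code; the point is only that the expected checksum travels
with the pointer, so a bad copy can be detected and another one tried.

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical on-disk pointer and helpers -- a sketch, not real ZFS. */
struct blkptr {
	uint64_t      dva[3];      /* addresses of up to three copies */
	int           ncopies;
	unsigned char cksum[32];   /* checksum stored with the *pointer* */
};

extern int  read_copy(uint64_t dva, void *buf, size_t len);
extern void checksum(const void *buf, size_t len, unsigned char out[32]);
extern void rewrite_bad_copies(const struct blkptr *bp, const void *buf, size_t len);

static int read_block(const struct blkptr *bp, void *buf, size_t len)
{
	unsigned char actual[32];
	int i;

	for (i = 0; i < bp->ncopies; i++) {
		if (read_copy(bp->dva[i], buf, len) != 0)
			continue;              /* I/O error: try the next copy */
		checksum(buf, len, actual);
		if (memcmp(actual, bp->cksum, sizeof(actual)) == 0) {
			if (i > 0)             /* an earlier copy was bad: */
				rewrite_bad_copies(bp, buf, len);  /* "self-heal" it */
			return 0;
		}
		/* checksum mismatch: silent corruption caught, try the next copy */
	}
	return -EIO;  /* every copy is bad -- but at least you *know* */
}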

In particular, on a RAID-5 system, ZFS tries dropping out each data disk
in turn to see if the correct data can be reconstructed from the others
+ parity.
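
In rough C, that combinatorial pass looks something like this (single
parity only, invented helper names, not the real RAID-Z code):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

extern void checksum(const void *buf, size_t len, unsigned char out[32]);

/*
 * Sketch: assume exactly one of the ncols data columns is bad.  For each
 * candidate column, rebuild it as parity XOR (all the other columns),
 * reassemble the logical block, and accept whichever reconstruction
 * matches the expected checksum.
 */
static int reconstruct_stripe(unsigned char **col, const unsigned char *parity,
			      int ncols, size_t colsz,
			      const unsigned char expect[32],
			      unsigned char *out /* ncols * colsz bytes */)
{
	unsigned char *fix = malloc(colsz);
	unsigned char actual[32];
	size_t b;
	int i, j;

	if (fix == NULL)
		return -ENOMEM;

	for (i = 0; i < ncols; i++) {
		memcpy(fix, parity, colsz);        /* start from parity ...   */
		for (j = 0; j < ncols; j++)
			if (j != i)
				for (b = 0; b < colsz; b++)
					fix[b] ^= col[j][b];   /* ... XOR the rest */

		for (j = 0; j < ncols; j++)        /* reassemble the block     */
			memcpy(out + j * colsz, (j == i) ? fix : col[j], colsz);

		checksum(out, (size_t)ncols * colsz, actual);
		if (memcmp(actual, expect, sizeof(actual)) == 0) {
			free(fix);
			return i;                  /* column i was the bad one */
		}
	}
	free(fix);
	return -EIO;  /* no single-column repair matches the checksum */
}

The same idea extends to double parity by trying pairs of columns.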

One of ZFS's big performance problems is that currently it only checksums
the entire RAID stripe, so it always has to read every drive, and doesn't
get RAID's IOPS advantage.  But that's fairly straightforward to fix.
(It's something of a problem for RAID-5 in general, because reads want
larger chunk sizes to increase the chance that a single read can be
satisfied by one disk, while writes want small chunks so that you can
do whole-stripe writes.)
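
To put rough numbers on it (invented purely for illustration): with four
data disks each good for ~100 random reads/s, per-block checksums and a
sensible chunk size let four independent small reads proceed in parallel,
so the array delivers on the order of 400 reads/s.  If every read has to
pull in the whole stripe just to verify the checksum, all four spindles
are busy for each read and you're back to ~100 reads/s, one disk's worth.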

The fact that the ZFS developers observed drives writing the data to the
wrong location emphasizes the importance of keeping the checksum with
the pointer.  An embedded checksum, no matter how good, can't tell you if
the data is stale; you need a way to distinguish versions in the pointer.
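
As a sketch of the difference (hypothetical structures, not the actual
on-disk formats):

#include <errno.h>
#include <stdint.h>
#include <string.h>

extern void checksum(const void *buf, size_t len, unsigned char out[32]);

/*
 * Embedded checksum: the block verifies itself.  A stale copy -- an old
 * version whose rewrite never hit the disk, or a write that landed at
 * the wrong LBA -- is still internally consistent, so it passes.
 */
struct self_checked_block {
	unsigned char payload[4096 - 32];
	unsigned char cksum[32];      /* checksum of payload only */
};

/*
 * Pointer-held checksum: the parent records the checksum (and, in ZFS,
 * the birth transaction group) of the exact version it expects.  A stale
 * or misdirected block hashes to something else and is rejected.
 */
struct parent_ptr {
	uint64_t      lba;           /* where the child should be     */
	uint64_t      birth_txg;     /* which version of it we expect */
	unsigned char cksum[32];     /* what that version must hash to */
};

static int verify_child(const struct parent_ptr *pp, const void *buf, size_t len)
{
	unsigned char actual[32];

	checksum(buf, len, actual);
	return memcmp(actual, pp->cksum, sizeof(actual)) == 0 ? 0 : -EIO;
}
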
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
