Date:	Thu, 7 Aug 2008 09:45:02 +0200
From:	Pavel Machek <pavel@...e.cz>
To:	Linas Vepstas <linasvepstas@...il.com>
Cc:	Alan Cox <alan@...rguk.ukuu.org.uk>,
	"Martin K. Petersen" <martin.petersen@...cle.com>,
	John Stoffel <john@...ffel.org>,
	Alistair John Strachan <alistair@...zero.co.uk>,
	linux-kernel@...r.kernel.org
Subject: Re: amd64 sata_nv (massive) memory corruption

Hi!

> >> I'm game. Care to guide me through?  So: on every write, this
> >> new device mapper module computes a checksum and stores
> >> it somewhere. On every read, it computes a checksum and
> >> compares to the stored value. Easy enough I guess.
> >>
> >> Several hard parts:
> >> -- where to store the checksums?
> >
> > That is the million dollar question - plus you can argue it is the fs
> > that should do it. There is stuff crawling through the standards world to
> > provide a small per block additional info area on disk sectors.
> 
> My objection to fs-layer checksums (e.g. in some user-space
> file system) is that it doesn't leverage the extra info that RAID
> has.  If a block is bad, RAID can probably fetch another one
> that is good. You can't do this at the file-system level.
> 
> I assume I can layer device-mappers anywhere, right?
> Layering one *underneath* md-raid would allow it to
> reject/discard bad blocks, and then let the raid layer
> try to find a good block somewhere else.
> 
> I assume that a device mapper can alter the number
> of blocks-in to the number of blocks-out; that it doesn't
> have to be 1-1. Then for every 10 sectors of data, it
> would use 11 sectors of storage, one holding the
> checksum.  I'm very naive about how the block layer
> works, so I don't know what snags there might be.
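
The 11-sectors-for-10 layout described above comes down to simple sector arithmetic. A minimal sketch, purely illustrative (the group size of 10 is taken from the mail; the function names and the choice of packing the checksum sector at the end of each group are assumptions, not anything the thread specifies):

```python
SECTORS_PER_GROUP = 10               # data sectors per group (from the mail)
GROUP_SPAN = SECTORS_PER_GROUP + 1   # plus one checksum sector on disk

def logical_to_physical(lsec):
    """Map a logical data sector to its on-disk sector in the 11-for-10 layout."""
    group, offset = divmod(lsec, SECTORS_PER_GROUP)
    return group * GROUP_SPAN + offset

def checksum_sector(lsec):
    """On-disk sector holding the checksums for lsec's group
    (here placed as the last sector of each group)."""
    group = lsec // SECTORS_PER_GROUP
    return group * GROUP_SPAN + SECTORS_PER_GROUP
```

Note that a group of 10 is conservative: with 512-byte sectors and, say, a 4-byte CRC per sector, one checksum sector could cover 128 data sectors, so the real space overhead could be far below 10%.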

I did something like that a long time ago -- with a loop device and a
separate partition for the checksums.
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
