Message-ID: <aa11afd31edb42979c03d2a27ed9e850@AcuMS.aculab.com>
Date: Tue, 3 Nov 2020 10:35:13 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Chao Yu' <yuchao0@...wei.com>, Jaegeuk Kim <jaegeuk@...nel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-f2fs-devel@...ts.sourceforge.net"
<linux-f2fs-devel@...ts.sourceforge.net>
Subject: RE: [f2fs-dev] [PATCH] f2fs: compress: support chksum
From: Chao Yu
> Sent: 03 November 2020 02:37
...
> >> Do we need to change fsck.f2fs to recover this?
>
> However, we don't know which one is correct: the compressed data or
> the chksum value. If the compressed data was corrupted, repairing the
> chksum value doesn't help.
>
> Or how about adding chksum values for both the raw data and the
> compressed data?
What errors are you trying to detect?

If there are errors in the data then 'fixing' the checksum is pointless
(you've got garbage data - might as well not have the checksum).

If you are worried about the implementation of the compression
algorithm, then a checksum of the raw data is needed.
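
For illustration, a minimal user-space sketch of that scheme - zlib's
crc32()/compress() standing in for the in-kernel f2fs helpers, so treat
it as an assumption-laden toy rather than the actual code path:

#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

/* Checksum the raw data *before* compression and verify it again
 * after decompression.  A checksum of the compressed payload cannot
 * catch a buggy compressor; this one can. */
int main(void)
{
	const Bytef raw[] = "example cluster payload";
	uLong raw_len = sizeof(raw);
	uLong raw_crc = crc32(0L, raw, raw_len);

	uLong cap = compressBound(raw_len);
	Bytef *comp = malloc(cap);
	uLongf comp_len = cap;
	if (!comp || compress(comp, &comp_len, raw, raw_len) != Z_OK)
		return 1;

	Bytef out[sizeof(raw)];
	uLongf out_len = sizeof(out);
	if (uncompress(out, &out_len, comp, comp_len) != Z_OK)
		return 1;

	/* A mismatch here means the compress/decompress round trip
	 * mangled the data, not the storage medium. */
	printf("raw crc %s\n",
	       crc32(0L, out, out_len) == raw_crc ? "OK" : "MISMATCH");
	free(comp);
	return 0;
}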
If you want to try correcting burst errors in the compressed data,
then a CRC of the compressed data can be used for that.
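
A real corrector exploits the algebraic structure of the CRC to locate
a burst directly; purely as a demonstration of the principle, here is a
toy brute-force version (again user-space zlib, and limited to a
degenerate one-bit burst):

#include <stdio.h>
#include <zlib.h>

/* Flip each bit in turn and re-test against the stored CRC.
 * O(n) CRC recomputations - fine for a toy, not for a filesystem. */
static long fix_single_bit(unsigned char *buf, size_t len, uLong want)
{
	for (size_t i = 0; i < len * 8; i++) {
		buf[i / 8] ^= (unsigned char)(1u << (i % 8));
		if (crc32(0L, buf, len) == want)
			return (long)i;	/* CRC matches again: corrected */
		buf[i / 8] ^= (unsigned char)(1u << (i % 8));	/* undo */
	}
	return -1;			/* not a single-bit error */
}

int main(void)
{
	unsigned char data[] = "compressed cluster bytes";
	uLong good = crc32(0L, data, sizeof(data));

	data[3] ^= 0x10;		/* inject a one-bit error */
	long bit = fix_single_bit(data, sizeof(data), good);
	if (bit >= 0)
		printf("corrected bit %ld\n", bit);
	else
		printf("uncorrectable\n");
	return 0;
}
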
OTOH the most likely error is that the file meta-data and data sector
weren't both committed to disk when the system crashed.
In which case the checksum has done its job and the file is corrupt.
fsck should probably move the file to 'lost+found' for manual checking.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)