Date:   Tue, 3 Nov 2020 19:51:05 +0800
From:   Chao Yu <yuchao0@...wei.com>
To:     David Laight <David.Laight@...LAB.COM>,
        Jaegeuk Kim <jaegeuk@...nel.org>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-f2fs-devel@...ts.sourceforge.net" 
        <linux-f2fs-devel@...ts.sourceforge.net>
Subject: Re: [f2fs-dev] [PATCH] f2fs: compress: support chksum

On 2020/11/3 18:35, David Laight wrote:
> From: Chao Yu
>> Sent: 03 November 2020 02:37
> ...
>>>> Do we need to change fsck.f2fs to recover this?
>>
>> However, we don't know which one is correct: the compressed data or the
>> chksum value. If the compressed data itself was corrupted, repairing the
>> chksum value doesn't help.
>>
>> Or how about adding chksum values for both the raw data and the compressed
>> data?
> 
> What errors are you trying to detect?

Hi,

The original intention of adding this checksum feature was to aid debugging
while I was developing the compress framework in f2fs and adding more
compression algorithms to it; it helped to find obvious implementation bugs.
However, the checksum feature was not fully designed at that point, so I
didn't upstream it at the time.

The other goal is to detect any mismatch between the original raw data and
the persisted data, regardless of how the mismatch was introduced, and to
return an error code to the user when a mismatch is detected.
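
For illustration, here is a minimal userspace sketch of that verify-on-read
step, assuming a CRC32 checksum stored alongside each compressed cluster.
The struct layout, helper name, and error handling below are hypothetical
and only approximate the idea, they are not the actual f2fs code:

/*
 * Hypothetical sketch: verify decompressed cluster data against a
 * checksum persisted at compress time. Uses zlib's crc32(); build
 * with: cc verify.c -lz
 */
#include <stdint.h>
#include <stdio.h>
#include <zlib.h>

struct cluster {
	uint32_t chksum;		/* crc32 of the raw (uncompressed) data */
	size_t rlen;			/* raw length after decompression */
	unsigned char *raw;		/* decompressed data */
};

/* Return 0 if the persisted checksum matches, -1 on mismatch. */
static int verify_cluster(const struct cluster *c)
{
	uint32_t calc = crc32(0L, c->raw, c->rlen);

	if (calc != c->chksum) {
		fprintf(stderr,
			"chksum mismatch: stored 0x%08x, calculated 0x%08x\n",
			(unsigned)c->chksum, (unsigned)calc);
		return -1;	/* caller propagates an error to the user */
	}
	return 0;
}

int main(void)
{
	unsigned char data[] = "hello f2fs";
	struct cluster c = {
		.chksum = crc32(0L, data, sizeof(data) - 1),
		.rlen = sizeof(data) - 1,
		.raw = data,
	};

	return verify_cluster(&c) ? 1 : 0;
}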

fsck could then repair a mismatched chksum in the case where one persisted
chksum matches its calculated value and the other one doesn't.
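
With both a raw-data chksum and a compressed-data chksum persisted, that
repair decision reduces to comparing stored and recalculated values. A
hedged sketch of the decision logic (the enum and function names here are
made up for illustration, this is not fsck.f2fs code):

/*
 * Hypothetical fsck decision logic with two persisted checksums.
 * If exactly one stored checksum disagrees with its recalculated
 * value, that checksum field is assumed stale and can be rewritten;
 * if both disagree, the data itself is suspect and no safe repair
 * exists.
 */
#include <assert.h>
#include <stdint.h>

enum repair_action {
	CLUSTER_OK,		/* both checksums match */
	REPAIR_RAW_CHKSUM,	/* only the raw-data checksum is stale */
	REPAIR_COMP_CHKSUM,	/* only the compressed-data checksum is stale */
	CLUSTER_CORRUPT,	/* both mismatch: e.g. move to lost+found */
};

static enum repair_action decide_repair(uint32_t stored_raw, uint32_t calc_raw,
					uint32_t stored_comp, uint32_t calc_comp)
{
	int raw_ok = (stored_raw == calc_raw);
	int comp_ok = (stored_comp == calc_comp);

	if (raw_ok && comp_ok)
		return CLUSTER_OK;
	if (comp_ok)
		return REPAIR_RAW_CHKSUM;
	if (raw_ok)
		return REPAIR_COMP_CHKSUM;
	return CLUSTER_CORRUPT;
}

int main(void)
{
	/* stale raw-data checksum, compressed data still consistent */
	assert(decide_repair(0xdead, 0xbeef, 0x1234, 0x1234) ==
	       REPAIR_RAW_CHKSUM);
	return 0;
}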

Thanks,

> 
> If there are errors in the data then 'fixing' the checksum is pointless.
> (You've got garbage data - might as well not have the checksum).
> 
> If you are worried about the implementation of the compression
> algorithm then a checksum of the raw data is needed.
> 
> If you want to try error correcting burst errors in the compressed
> data then a crc of the compressed data can be used for error correction.
> 
> OTOH the most likely error is that the file meta-data and data sector
> weren't both committed to disk when the system crashed.
> In which case the checksum has done its job and the file is corrupt.
> fsck should probably move the file to 'lost+found' for manual checking.
> 
> 	David
> 
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
> 
