Message-ID: <aJFvjBQv8PzJeFNN@google.com>
Date: Tue, 5 Aug 2025 02:42:20 +0000
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <chao@...nel.org>
Cc: linux-f2fs-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org,
	Jan Prusakowski <jprusakowski@...gle.com>
Subject: Re: [PATCH v2 3/3] f2fs: fix to zero data after EOF for compressed
 file correctly

Chao, it seems you made a cleanup before this? Can you post it first?

On 08/04, Chao Yu wrote:
> generic/091 may fail, and bisection points to commit ba8dac350faf
> ("f2fs: fix to zero post-eof page").
> 
> What causes generic/091 to fail is a sequence like Testcase #1 below:
> 1. write 16k as compressed blocks
> 2. truncate to 12k
> 3. truncate to 20k
> 4. verify data in the range [12k, 16k]; however, the data is not zero as
> expected
> 
> Script of Testcase #1
> mkfs.f2fs -f -O extra_attr,compression /dev/vdb
> mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
> dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
> dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
> sync
> truncate -s $((12*1024)) /mnt/f2fs/file
> truncate -s $((20*1024)) /mnt/f2fs/file
> dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
> od /mnt/f2fs/data
> umount /mnt/f2fs
> 
> Analysis:
> In step 2), we redirty all data pages #0 to #3 in the compressed
> cluster and zero page #3.
> In step 3), f2fs_setattr() calls f2fs_zero_post_eof_page() to drop all
> page cache post-EOF, including the dirtied page #3.
> In step 4), when we read data from page #3, the cluster is decompressed
> and the stale random data is copied back into page #3, so we hit
> non-zeroed data post-EOF.
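> 
> For reference, a quick check of the page-index math above (just a
> sketch, not part of the patch; it assumes the default 4KiB page size
> and the 4-page cluster used by the script):
> 
> PAGE=4096
> echo "EOF page after 'truncate -s 12k': #$((12 * 1024 / PAGE))"   # -> page #3
> echo "page #3 covers bytes [$((3 * PAGE)), $((4 * PAGE)))"        # -> [12288, 16384)
> # so [12k, 16k] sits inside the on-disk cluster but beyond i_size, and
> # must read back as zero once the file is expanded to 20k.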
> 
> However, commit ba8dac350faf ("f2fs: fix to zero post-eof page") only
> makes the issue easier to reproduce; w/o that commit, the bug can still
> be reproduced w/ Testcase #2 below:
> 1. write 16k as compressed blocks
> 2. truncate to 8k
> 3. truncate to 12k
> 4. truncate to 20k
> 5. verify data in the range [12k, 16k]; however, the data is not zero as
> expected
> 
> Script of Testcase #2
> mkfs.f2fs -f -O extra_attr,compression /dev/vdb
> mount -t f2fs -o compress_extension=* /dev/vdb /mnt/f2fs
> dd if=/dev/zero of=/mnt/f2fs/file bs=12k count=1
> dd if=/dev/random of=/mnt/f2fs/file bs=4k count=1 seek=3 conv=notrunc
> sync
> truncate -s $((8*1024)) /mnt/f2fs/file
> truncate -s $((12*1024)) /mnt/f2fs/file
> truncate -s $((20*1024)) /mnt/f2fs/file
> echo 3 > /proc/sys/vm/drop_caches
> dd if=/mnt/f2fs/file of=/mnt/f2fs/data bs=4k count=1 skip=3
> od /mnt/f2fs/data
> umount /mnt/f2fs
> 
> Analysis:
> In step 2), we redirty all data pages #0 to #3 in the compressed
> cluster and zero pages #2 and #3.
> In step 3), we truncate page #3 in the page cache.
> In step 4), we expand the file size.
> In step 5), we hit random data post-EOF for the same reason as in
> Testcase #1.
> 
> Root Cause:
> In f2fs_truncate_partial_cluster(), after we truncate a partial data
> block in a compressed cluster, all pages in the cluster, including the
> one post-EOF, are dirtied. After another truncation, the dirty post-EOF
> page is dropped, but the on-disk compressed cluster is still valid and
> still contains the stale data post-EOF, resulting in the previous data
> post-EOF being exposed on read.
> 
> Fix:
> In f2fs_truncate_partial_cluster(), make the changes below:
> - call filemap_write_and_wait_range() to flush the dirty pages
> - call truncate_pagecache() to drop pages post-EOF or zero the partial
>   page at EOF
> - call f2fs_do_truncate_blocks() to truncate the non-compressed cluster
>   to the last valid block
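> 
> Plugging the Testcase #1 numbers into the new boundaries (again just a
> sketch to double-check the arithmetic, assuming from = 12k, 4KiB pages
> and log_cluster_size = 2):
> 
> FROM=$((12 * 1024))
> CLUSTER_BYTES=$(( (1 << 2) << 12 ))                              # 16k
> echo "flush from:  $(( FROM / CLUSTER_BYTES * CLUSTER_BYTES ))"  # round_down -> 0
> echo "truncate at: $(( (FROM + 4095) / 4096 * 4096 ))"           # round_up   -> 12k
> # i.e. the whole dirtied cluster is written back first, the page cache
> # is truncated at 12k, and blocks from 12k onward can then be dropped.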
> 
> Fixes: 3265d3db1f16 ("f2fs: support partial truncation on compressed inode")
> Reported-by: Jan Prusakowski <jprusakowski@...gle.com>
> Signed-off-by: Chao Yu <chao@...nel.org>
> ---
> v2:
> - dirty & flush all pages in the cluster, and truncate blocks post-EOF
> afterwards
>  fs/f2fs/compress.c | 20 ++++++++++++++------
>  1 file changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index e37a7ed801e5..6ad8d3bc6df7 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -1245,17 +1245,25 @@ int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock)
>  	for (i = (1 << log_cluster_size) - 1; i >= 0; i--) {
>  		struct folio *folio = page_folio(rpages[i]);
>  		loff_t start = (loff_t)folio->index << PAGE_SHIFT;
> +		loff_t offset = from > start ? from - start : 0;
>  
> -		if (from > start) {
> -			folio_zero_segment(folio, from - start,
> -					folio_size(folio));
> +		folio_zero_segment(folio, offset, folio_size(folio));
> +
> +		if (from >= start)
>  			break;
> -		}
> -		folio_zero_segment(folio, 0, folio_size(folio));
>  	}
>  
>  	f2fs_compress_write_end(inode, fsdata, start_idx, true);
> -	return 0;
> +
> +	err = filemap_write_and_wait_range(inode->i_mapping,
> +			round_down(from, 1 << log_cluster_size << PAGE_SHIFT),
> +			LLONG_MAX);
> +	if (err)
> +		return err;
> +
> +	truncate_pagecache(inode, from);
> +
> +	return f2fs_do_truncate_blocks(inode, round_up(from, PAGE_SIZE), lock);
>  }
>  
>  static int f2fs_write_compressed_pages(struct compress_ctx *cc,
> -- 
> 2.49.0
