Message-ID: <20200506145608.GD107238@google.com>
Date:   Wed, 6 May 2020 07:56:08 -0700
From:   Jaegeuk Kim <jaegeuk@...nel.org>
To:     Chao Yu <yuchao0@...wei.com>
Cc:     Chao Yu <chao@...nel.org>, kernel-team@...roid.com,
        linux-kernel@...r.kernel.org,
        linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size

On 05/06, Chao Yu wrote:
> On 2020/5/6 7:05, Jaegeuk Kim wrote:
> > On 05/05, Chao Yu wrote:
> >> On 2020-5-4 22:30, Jaegeuk Kim wrote:
> >>> From: Daeho Jeong <daehojeong@...gle.com>
> >>>
> >>> The current zstd compression buffer size is the cluster size minus
> >>> one page and the header size. Because of this, zstd compression
> >>> always reports success even when the compressed data fails to fit
> >>> into the buffer, and reading the cluster back eventually returns an
> >>> I/O error due to the corrupted compressed data.
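(For scale, a minimal sketch of the two sizing strategies, assuming 4K
pages and compress_log_size=2; COMPRESS_HEADER_SIZE is left symbolic:)

	/* cluster of 4 pages: cc->rlen = PAGE_SIZE << 2 = 16384 bytes */

	/* old: dst buffer is one page plus the header smaller than the
	 * cluster, i.e. 16384 - 4096 - COMPRESS_HEADER_SIZE bytes */
	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;

	/* new: worst-case output bound for the whole cluster; note this
	 * is always larger than the cluster size itself */
	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);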
> >>
> >> What's the root cause of this issue? I didn't get it.
> >>
> >>>
> >>> Signed-off-by: Daeho Jeong <daehojeong@...gle.com>
> >>> ---
> >>>  fs/f2fs/compress.c | 4 ++--
> >>>  1 file changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> >>> index 4c7eaeee52336..a9fa8049b295f 100644
> >>> --- a/fs/f2fs/compress.c
> >>> +++ b/fs/f2fs/compress.c
> >>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
> >>>  	cc->private = workspace;
> >>>  	cc->private2 = stream;
> >>>
> >>> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> >>> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
> >>
> >> On my machine, the value is 66572, which is much larger than the size
> >> of the dst buffer. So where do we tell the zstd compressor the real
> >> size of the dst buffer? Otherwise, if the compressed data size is
> >> larger than the dst buffer size, we may overflow the dst buffer when
> >> flushing the compressed data into it.
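(For reference: in the zstd streaming API the destination capacity is
carried in the ZSTD_outBuffer handed to ZSTD_compressStream() and
ZSTD_endStream(), and the compressor never writes past outbuf.size, so
the overflow question comes down to whether the allocated dst buffer is
really cc->clen bytes. A minimal sketch of how the capacity is conveyed,
assuming the buffer field names used by f2fs:)

	ZSTD_outBuffer outbuf;

	outbuf.pos  = 0;                /* advanced by the compressor */
	outbuf.dst  = cc->cbuf->cdata;  /* destination buffer */
	outbuf.size = dst_size;         /* hard capacity, never exceeded */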
> > 
> > Could you try compress_log_size=2 and check that decompression works?
> 
> I tried some samples before submitting the patch. Did you encounter
> application data corruption when using the zstd algorithm? If so, let me
> check this issue.

Daeho reported:
1. cp -a src_file comp_dir/dst_file (comp_dir is a directory for compression)
2. sync comp_dir/dst_file
3. echo 3 > /proc/sys/vm/drop_caches
4. cat comp_dir/dst_file > dump (this returns an I/O error for some files).

> 
> Thanks,
> 
> > 
> >>
> >>>  	return 0;
> >>>  }
> >>>
> >>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
> >>>  	ZSTD_inBuffer inbuf;
> >>>  	ZSTD_outBuffer outbuf;
> >>>  	int src_size = cc->rlen;
> >>> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> >>> +	int dst_size = cc->clen;
> >>>  	int ret;
> >>>
> >>>  	inbuf.pos = 0;
> >>>
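(For context beyond the truncated hunk, the stream is then driven
roughly as below; this is an illustrative sketch following the zstd
streaming API, not the verbatim f2fs code:)

	ret = ZSTD_compressStream(stream, &outbuf, &inbuf);
	if (ZSTD_isError(ret))
		return -EIO;

	ret = ZSTD_endStream(stream, &outbuf);
	if (ZSTD_isError(ret))
		return -EIO;

	/* a nonzero return from ZSTD_endStream() means data is still
	 * buffered in the stream, i.e. the frame did not fit in outbuf */
	if (ret)
		return -EAGAIN;

	cc->clen = outbuf.pos;	/* actual compressed size */
	return 0;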
