Message-ID: <357d5a28-5163-c52d-2a7d-c6162b56e1cb@huawei.com>
Date: Thu, 7 May 2020 17:16:36 +0800
From: Chao Yu <yuchao0@...wei.com>
To: Jaegeuk Kim <jaegeuk@...nel.org>
CC: Chao Yu <chao@...nel.org>, <kernel-team@...roid.com>,
<linux-kernel@...r.kernel.org>,
<linux-f2fs-devel@...ts.sourceforge.net>
Subject: Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer
size
On 2020/5/6 22:56, Jaegeuk Kim wrote:
> On 05/06, Chao Yu wrote:
>> On 2020/5/6 7:05, Jaegeuk Kim wrote:
>>> On 05/05, Chao Yu wrote:
>>>> On 2020-5-4 22:30, Jaegeuk Kim wrote:
>>>>> From: Daeho Jeong <daehojeong@...gle.com>
>>>>>
>>>>> The current zstd compression buffer size is one page plus the header
>>>>> size smaller than the cluster size. Because of this, zstd compression
>>>>> always reports success even when the compressed data fails to fit into
>>>>> the buffer, and reading the cluster back eventually returns an I/O
>>>>> error due to the corrupted compressed data.
>>>>
>>>> What's the root cause of this issue? I didn't get it.
>>>>
>>>>>
>>>>> Signed-off-by: Daeho Jeong <daehojeong@...gle.com>
>>>>> ---
>>>>> fs/f2fs/compress.c | 4 ++--
>>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>>>>> index 4c7eaeee52336..a9fa8049b295f 100644
>>>>> --- a/fs/f2fs/compress.c
>>>>> +++ b/fs/f2fs/compress.c
>>>>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>>>>> cc->private = workspace;
>>>>> cc->private2 = stream;
>>>>>
>>>>> - cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>>>> + cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
>>>>
>>>> On my machine, the value is 66572, which is much larger than the size of
>>>> the dst buffer. So where do we tell the zstd compressor the real size of
>>>> the dst buffer? Otherwise, if the compressed data is larger than the dst
>>>> buffer, we may overflow the dst buffer when flushing the compressed data
>>>> into it.
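
(For reference: with the streaming API, the destination size is communicated
through ZSTD_outBuffer.size. A minimal sketch of the buffer setup in
zstd_compress_pages(), assuming the cc->rbuf/cc->cbuf->cdata layout used
elsewhere in compress.c:

	inbuf.pos = 0;
	inbuf.src = cc->rbuf;		/* raw cluster data */
	inbuf.size = src_size;

	outbuf.pos = 0;
	outbuf.dst = cc->cbuf->cdata;	/* compressed destination buffer */
	outbuf.size = dst_size;		/* zstd never writes past this */

ZSTD_compressStream()/ZSTD_endStream() never write beyond outbuf.size, so the
dst buffer itself cannot overflow; the danger is that bytes which do not fit
stay buffered inside the stream while the truncated output is treated as a
successful compression.)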
>>>
>>> Could you give compress_log_size=2 a try and check that decompression works?
>>
>> I tried some samples before submitting the patch. Did you encounter
>> application data corruption when using the zstd algorithm? If so, let me
>> check this issue.
>
> Daeho reported:
> 1. cp -a src_file comp_dir/dst_file (comp_dir is a directory for compression)
> 2. sync comp_dir/dst_file
> 3. echo 3 > /proc/sys/vm/drop_caches
> 4. cat comp_dir/dst_file > dump (this returns an I/O error, depending on the file).
Quoting the zstd documentation for ZSTD_endStream():

 * ZSTD_endStream() to complete the flush. It returns the number of bytes left
 * in the internal buffer and must be called until it returns 0.
It looks like we need to check the return value of ZSTD_endStream() to see
whether the dst buffer had enough space to store all of the compressed data.
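
A minimal sketch of such a check (based on the kernel zstd API, where
ZSTD_endStream() returns the number of unflushed bytes and ZSTD_isError()
flags failures; returning -EAGAIN here is just one way to ask the caller to
fall back to writing the cluster uncompressed):

	ret = ZSTD_endStream(stream, &outbuf);
	if (ZSTD_isError(ret))
		return -EIO;
	/*
	 * A non-zero return value means compressed bytes are still buffered
	 * in the stream because the dst buffer ran out of space; bail out
	 * rather than writing a truncated payload.
	 */
	if (ret)
		return -EAGAIN;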
Thanks,
>
>>
>> Thanks,
>>
>>>
>>>>
>>>>> return 0;
>>>>> }
>>>>>
>>>>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>>>>> ZSTD_inBuffer inbuf;
>>>>> ZSTD_outBuffer outbuf;
>>>>> int src_size = cc->rlen;
>>>>> - int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>>>> + int dst_size = cc->clen;
>>>>> int ret;
>>>>>
>>>>> inbuf.pos = 0;
>>>>>