Message-Id: <20200504143039.155644-1-jaegeuk@kernel.org>
Date: Mon, 4 May 2020 07:30:39 -0700
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, kernel-team@...roid.com
Cc: Daeho Jeong <daehojeong@...gle.com>
Subject: [PATCH] f2fs: change maximum zstd compression buffer size
From: Daeho Jeong <daehojeong@...gle.com>
Currently the zstd compression buffer size is set to the cluster size minus
one page and the header size. With this, zstd compression always reports
success even if the real compressed data fails to fit into the buffer, and
reading the cluster back eventually returns an I/O error because of the
corrupted compressed data.
Signed-off-by: Daeho Jeong <daehojeong@...gle.com>
---
fs/f2fs/compress.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 4c7eaeee52336..a9fa8049b295f 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
 	cc->private = workspace;
 	cc->private2 = stream;

-	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
 	return 0;
 }
@@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
 	ZSTD_inBuffer inbuf;
 	ZSTD_outBuffer outbuf;
 	int src_size = cc->rlen;
-	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	int dst_size = cc->clen;
 	int ret;

 	inbuf.pos = 0;
--
2.26.2.526.g744177e7f7-goog