Message-ID: <YtAcceR/K/2eFqN4@B-P7TQMD6M-0146.local>
Date: Thu, 14 Jul 2022 21:38:57 +0800
From: Gao Xiang <hsiangkao@...ux.alibaba.com>
To: linux-erofs@...ts.ozlabs.org, Chao Yu <chao@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 00/16] erofs: prepare for folios, duplication and kill PG_error
On Thu, Jul 14, 2022 at 09:20:35PM +0800, Gao Xiang wrote:
> Hi folks,
>
> I've been working on this for almost 2 months. The main point of this
> is to support large folios and rolling hash deduplication for
> compressed data.
>
> This patchset is a start of this work targeting the next 5.20. It
> introduces a flexible range representation for (de)compressed buffers
> instead of relying on pages directly, so that large folio support can
> later be built on top of it. This patchset also gets rid of all
> PG_error flags in the decompression code, which serves as a cleanup
> as well.
>
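(Side note for readers: as a rough illustration of what "a flexible range
representation" means here, the userspace sketch below describes the
decompressed output as (buffer, offset, length) ranges instead of whole
pages. All names are made up for explanation only and are not the actual
structures added by this series.)

#include <stddef.h>
#include <string.h>

/* Illustrative only: one byte range of the decompressed output. */
struct out_range {
        unsigned char *base;    /* backing buffer (stands in for a page/folio) */
        size_t offset;          /* start offset inside the backing buffer */
        size_t len;             /* number of decompressed bytes for this range */
};

/* Scatter decompressed data in @src into the given output ranges. */
size_t scatter_to_ranges(const unsigned char *src, size_t srclen,
                         struct out_range *ranges, size_t nr)
{
        size_t copied = 0;

        for (size_t i = 0; i < nr && copied < srclen; i++) {
                size_t n = ranges[i].len;

                if (n > srclen - copied)
                        n = srclen - copied;
                memcpy(ranges[i].base + ranges[i].offset, src + copied, n);
                copied += n;
        }
        return copied;
}
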
> In addition, this patchset kicks off rolling hash deduplication for
> compressed data by first introducing fully-referenced multi-reference
> pclusters, instead of reporting fs corruption if one pcluster is
> referenced by several different extents. The full implementation is
> expected to be finished in the merge window after the next one. One
> of my colleagues is actively working on the userspace part of this
> feature.
>
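(Side note for readers: "rolling hash deduplication" refers to
content-defined chunking on the userspace side. The toy sketch below only
shows the general idea of cutting chunk boundaries with a rolling hash;
it is not the userspace implementation mentioned above, and all constants
and names are illustrative.)

#include <stddef.h>
#include <stdio.h>

#define WINDOW  48u                     /* sliding window size in bytes */
#define BASE    16777619u               /* multiplier of the polynomial hash */
#define MASK    ((1u << 13) - 1)        /* boundary mask, ~8KiB average chunks */

/* Print chunk boundaries chosen by a rolling hash over @data. */
void chunk_boundaries(const unsigned char *data, size_t len)
{
        unsigned int hash = 0, pow = 1;
        size_t start = 0;

        /* pow = BASE^(WINDOW-1) mod 2^32, used to drop the oldest byte */
        for (unsigned int i = 0; i < WINDOW - 1; i++)
                pow *= BASE;

        for (size_t i = 0; i < len; i++) {
                if (i >= start + WINDOW)
                        hash -= pow * data[i - WINDOW]; /* drop oldest byte */
                hash = hash * BASE + data[i];           /* append new byte */

                if (i + 1 >= start + WINDOW && (hash & MASK) == 0) {
                        printf("chunk: [%zu, %zu)\n", start, i + 1);
                        start = i + 1;
                        hash = 0;
                }
        }
        if (start < len)
                printf("chunk: [%zu, %zu)\n", start, len);
}
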
> However, it's still easy to verify fully-referenced multi-reference
> pclusters by constructing an image by hand (see attachment):
>
> Dataset: 300M
> seq-read (data-deduplicated, read_ahead_kb 8192): 1095MiB/s
> seq-read (data-deduplicated, read_ahead_kb 4096): 771MiB/s
> seq-read (data-deduplicated, read_ahead_kb 512): 577MiB/s
> seq-read (vanilla, read_ahead_kb 8192): 364MiB/s
>
The test data used above is attached for reference.
Download attachment "pat.erofs.xz" of type "application/octet-stream" (12212 bytes)