Message-ID: <20201204074323.GA2025226@xiangao.remote.csb>
Date: Fri, 4 Dec 2020 15:43:23 +0800
From: Gao Xiang <hsiangkao@...hat.com>
To: Chao Yu <yuchao0@...wei.com>
Cc: Eric Biggers <ebiggers@...nel.org>, jaegeuk@...nel.org,
linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH v6] f2fs: compress: support compress level
Hi Chao,
On Fri, Dec 04, 2020 at 03:09:20PM +0800, Chao Yu wrote:
> On 2020/12/4 8:31, Gao Xiang wrote:
> > could make more sense), could you leave some CR numbers about these
> > algorithms on typical datasets (enwik9, silisia.tar or else.) with 16k
> > cluster size?
>
> Just from a quick test with enwik9 on vm:
>
> Original blocks: 244382
>
>                     lz4                     lz4hc-9
> compressed blocks   170647                  163270
> compress ratio      69.8%                   66.8%
> speed               16.4207 s, 60.9 MB/s    26.7299 s, 37.4 MB/s
>
> compress ratio = after / before
Thanks for the confirmation. It'd be better to add this to the commit message
when adding a new algorithm, to show the benefits.
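(To spell it out from your numbers, with CR = after / before:
170647 / 244382 ~= 69.8% for lz4 and 163270 / 244382 ~= 66.8% for lz4hc-9,
so lz4hc-9 saves roughly 3% more blocks on enwik9 at 16k clusters.)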
About the speed, I think it is also limited by the storage device and other
conditions (I mean the CPU load during writeback might differ between lz4
and lz4hc-9 due to many other bounds; e.g. UFS 3.0 sequential write is
somewhat faster than a VM, so lz4 may show higher bandwidth on high-end
devices since the test looks somewhat I/O-bound here... I guess, but I'm not
sure, since pure in-memory lz4 is fast according to lzbench / the lz4
homepage.)
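
If it helps, here is a rough, hypothetical userspace sketch (just an
illustration, not the f2fs path) that measures pure in-memory lz4 vs
lz4hc-9 over 16k clusters, so the CPU cost can be compared without the
storage bound. It assumes liblz4 is installed (build with
"cc lz4bench.c -llz4"); the file name and layout are placeholders:

/*
 * Hypothetical sketch: in-memory lz4 vs lz4hc-9 on 16 KiB clusters.
 * Not the actual f2fs compression path; only meant to separate the
 * pure CPU cost from the storage/writeback bound discussed above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <lz4.h>
#include <lz4hc.h>

#define CLUSTER (16 * 1024)	/* mimic a 16k compress cluster */

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void bench(const char *name, const char *buf, size_t len, int hc_level)
{
	char *dst = malloc(LZ4_compressBound(CLUSTER));
	size_t off, out_bytes = 0;
	double t0 = now_sec(), dt;

	for (off = 0; off < len; off += CLUSTER) {
		int in = (len - off < CLUSTER) ? (int)(len - off) : CLUSTER;
		int out = hc_level ?
			LZ4_compress_HC(buf + off, dst, in,
					LZ4_compressBound(in), hc_level) :
			LZ4_compress_default(buf + off, dst, in,
					     LZ4_compressBound(in));

		/* a cluster that doesn't shrink would be stored as-is */
		out_bytes += (out > 0 && out < in) ? (size_t)out : (size_t)in;
	}

	dt = now_sec() - t0;
	printf("%-8s ratio %.1f%%  speed %.1f MB/s\n", name,
	       100.0 * out_bytes / len, len / dt / 1e6);
	free(dst);
}

int main(int argc, char **argv)
{
	FILE *f;
	long len;
	char *buf;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file, e.g. enwik9>\n", argv[0]);
		return 1;
	}

	/* slurp the whole file so the benchmark is purely in-memory */
	f = fopen(argv[1], "rb");
	if (!f)
		return 1;
	fseek(f, 0, SEEK_END);
	len = ftell(f);
	rewind(f);
	buf = malloc(len);
	if (!buf || fread(buf, 1, len, f) != (size_t)len)
		return 1;
	fclose(f);

	bench("lz4", buf, len, 0);
	bench("lz4hc-9", buf, len, 9);
	free(buf);
	return 0;
}

Running it against enwik9 should roughly reproduce the CR gap above, while
the MB/s numbers show the in-memory ceiling rather than writeback throughput.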
Anyway, it's up to the f2fs folks whether it's useful :) (the CR numbers are
about what I expected, though... I'm a bit afraid of the extra CPU load at
runtime.)
Thanks for your time!
Thanks,
Gao Xiang
>
> Thanks,
>