Message-ID: <ffca7f07-653f-1270-72d4-e66ffc8a7473@huawei.com>
Date: Fri, 4 Dec 2020 16:50:14 +0800
From: Chao Yu <yuchao0@...wei.com>
To: Gao Xiang <hsiangkao@...hat.com>
CC: Eric Biggers <ebiggers@...nel.org>, <jaegeuk@...nel.org>,
<linux-kernel@...r.kernel.org>,
<linux-f2fs-devel@...ts.sourceforge.net>
Subject: Re: [f2fs-dev] [PATCH v6] f2fs: compress: support compress level
Hi Xiang,
On 2020/12/4 15:43, Gao Xiang wrote:
> Hi Chao,
>
> On Fri, Dec 04, 2020 at 03:09:20PM +0800, Chao Yu wrote:
>> On 2020/12/4 8:31, Gao Xiang wrote:
>>> could make more sense), could you leave some CR numbers for these
>>> algorithms on typical datasets (enwik9, silesia.tar, etc.) with a 16k
>>> cluster size?
>>
>> Just from a quick test with enwik9 on a VM:
>>
>> Original blocks: 244382
>>
>>                     lz4                     lz4hc-9
>> compressed blocks   170647                  163270
>> compress ratio      69.8%                   66.8%
>> speed               16.4207 s, 60.9 MB/s    26.7299 s, 37.4 MB/s
>>
>> compress ratio = after / before
>
> Thanks for the confirmation. It'd be better to add this to the commit message
> (if needed) when adding a new algorithm, to show the benefits.
Sure, will add this.
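For reference, the ratio above is simply after / before on block counts; below is a
quick standalone sketch with the reported enwik9 numbers, nothing f2fs-specific:

/* Quick check of the ratio arithmetic, using the block counts reported
 * above for enwik9; this does not touch f2fs itself. */
#include <stdio.h>

int main(void)
{
	const double before = 244382;	/* original blocks */
	const double lz4 = 170647;	/* compressed blocks, lz4 */
	const double lz4hc9 = 163270;	/* compressed blocks, lz4hc-9 */

	/* compress ratio = after / before */
	printf("lz4     : %.1f%%\n", 100.0 * lz4 / before);	/* -> 69.8% */
	printf("lz4hc-9 : %.1f%%\n", 100.0 * lz4hc9 / before);	/* -> 66.8% */
	return 0;
}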
>
> About the speed, I think it is also limited by the storage device and other
> conditions (I mean the CPU loading during writeback might be different
> between lz4 and lz4hc-9 due to many other bounds, e.g. UFS 3.0 seq
> write is somewhat higher vs a VM. lz4 may have higher bandwidth on high-end
> devices since it seems somewhat IO-bound here... I guess but am not sure,
> since pure in-memory lz4 is fast according to lzbench / the lz4 homepage.)
Yeah, I guess my VM is limited by its storage bandwidth, and its back-end
could be a low-end rotating disk...
>
> Anyway, it's up to the f2fs folks whether it's useful :) (the CR numbers are what
> I expected though... I'm a bit afraid of the CPU runtime loading.)
I just had a glance at the CPU usage numbers (my VM has 16 cores):
lz4hc takes 11% in the first half and drops to 6% in the second half.
lz4 takes 6% during the whole process.
But that's not accurate...
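
For what it's worth, the in-memory cost could be checked outside the kernel with a
rough userspace sketch against plain liblz4, compressing the file in 16 KiB chunks so
the storage back-end drops out of the picture. This is only an approximation of the
f2fs path (the kernel does not use liblz4 directly, and cluster handling differs);
the file name, chunk size and level 9 are just the values discussed in this thread:

/*
 * Rough userspace sketch: compress a file in 16 KiB chunks with plain LZ4
 * vs. LZ4HC level 9 through liblz4, measuring ratio and throughput with no
 * storage or writeback involved. Not the f2fs/kernel path.
 *
 * Build: gcc -O2 lz4bench.c -llz4 -o lz4bench
 */
#include <lz4.h>
#include <lz4hc.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CHUNK	(16 * 1024)	/* 16k "cluster" size from this thread */

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "enwik9";
	FILE *f = fopen(path, "rb");
	char *in, *out;
	size_t n, total_in = 0, lz4_out = 0, hc_out = 0;
	double t, lz4_time = 0, hc_time = 0;

	if (!f) {
		perror(path);
		return 1;
	}

	in = malloc(CHUNK);
	out = malloc(LZ4_compressBound(CHUNK));

	while ((n = fread(in, 1, CHUNK, f)) > 0) {
		total_in += n;

		t = now_sec();
		lz4_out += LZ4_compress_default(in, out, (int)n,
						LZ4_compressBound(CHUNK));
		lz4_time += now_sec() - t;

		t = now_sec();
		hc_out += LZ4_compress_HC(in, out, (int)n,
					  LZ4_compressBound(CHUNK), 9);
		hc_time += now_sec() - t;
	}
	fclose(f);

	printf("lz4     : ratio %.1f%%, %.1f MB/s\n",
	       100.0 * lz4_out / total_in, total_in / 1e6 / lz4_time);
	printf("lz4hc-9 : ratio %.1f%%, %.1f MB/s\n",
	       100.0 * hc_out / total_in, total_in / 1e6 / hc_time);
	free(in);
	free(out);
	return 0;
}

Since nothing here is IO-bound, this would be expected to show the per-byte CPU gap
between lz4 and lz4hc-9 much more clearly than the writeback numbers above.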
Thanks,
> Thanks for your time!
>
> Thanks,
> Gao Xiang
>
>>
>> Thanks,
>>
>
> .
>