Date:   Fri, 4 Dec 2020 08:31:19 +0800
From:   Gao Xiang <hsiangkao@...hat.com>
To:     Chao Yu <yuchao0@...wei.com>
Cc:     Eric Biggers <ebiggers@...nel.org>, jaegeuk@...nel.org,
        linux-kernel@...r.kernel.org,
        linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH v6] f2fs: compress: support compress level

Hi Chao,

On Thu, Dec 03, 2020 at 11:32:34AM -0800, Eric Biggers wrote:

...

> 
> What is the use case for storing the compression level on-disk?
> 
> Keep in mind that compression levels are an implementation detail; the exact
> compressed data that is produced by a particular algorithm at a particular
> compression level is *not* a stable interface.  It can change when the
> compressor is updated, as long as the output continues to be compatible with the
> decompressor.
> 
> So does compression level really belong in the on-disk format?
> 
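
(Eric's point is visible even from userspace liblz4 -- the following is
a minimal sketch, not the f2fs kernel path: data produced at any
compression level decodes through the same level-agnostic call.)

/* Minimal userspace sketch (liblz4, not the f2fs kernel code): the
 * decompressor never needs to know which level produced the data. */
#include <lz4.h>
#include <lz4hc.h>
#include <stdio.h>

int main(void)
{
        const char src[] = "f2fs f2fs f2fs f2fs f2fs f2fs f2fs f2fs";
        char fast[128], hc[128], out[128];

        /* Same input, two encoders at different levels. */
        int n1 = LZ4_compress_default(src, fast, sizeof(src), sizeof(fast));
        int n2 = LZ4_compress_HC(src, hc, sizeof(src), sizeof(hc),
                                 LZ4HC_CLEVEL_MAX);

        /* One decoder, no level argument, handles both outputs. */
        int d1 = LZ4_decompress_safe(fast, out, n1, sizeof(out));
        int d2 = LZ4_decompress_safe(hc, out, n2, sizeof(out));

        printf("fast %d bytes, hc %d bytes, decoded %d and %d\n",
               n1, n2, d1, d2);
        return 0;
}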

Curious about this: since f2fs compression uses a 16k compress cluster
by default (by design it doesn't do sub-block compression as btrfs
does), is there a significant compression ratio (CR) difference between
lz4 and lz4hc with the 16k configuration? (I guess using zstd or lz4hc
with a 128k cluster, as btrfs does, could make more sense.) Could you
share some CR numbers for these algorithms on typical datasets (enwik9,
silesia.tar, etc.) with a 16k cluster size?
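
Something like the following quick harness (userspace liblz4, just a
sketch that approximates the 16k cluster split, not the actual f2fs
write path) is what I have in mind for collecting those numbers:

/* Rough userspace approximation: compress a file in 16k chunks,
 * mirroring the default f2fs cluster size, and compare lz4 vs lz4hc
 * overall ratios. Build with: cc cr.c -llz4 */
#include <lz4.h>
#include <lz4hc.h>
#include <stdio.h>

#define CLUSTER (16 * 1024)     /* default f2fs compress cluster size */

int main(int argc, char **argv)
{
        FILE *f = fopen(argc > 1 ? argv[1] : "silesia.tar", "rb");
        static char in[CLUSTER];
        static char out[LZ4_COMPRESSBOUND(CLUSTER)];
        long raw = 0, fast = 0, hc = 0;
        size_t n;

        if (!f)
                return 1;
        while ((n = fread(in, 1, CLUSTER, f)) > 0) {
                raw  += n;
                fast += LZ4_compress_default(in, out, n, sizeof(out));
                hc   += LZ4_compress_HC(in, out, n, sizeof(out), 9);
        }
        fclose(f);
        printf("lz4   CR: %.3f\n", (double)raw / fast);
        printf("lz4hc CR: %.3f\n", (double)raw / hc);
        return 0;
}

Running it over silesia.tar with CLUSTER set to 16k and then 128k
should show how much of lz4hc's ratio advantage survives the small
cluster size.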

As you may have noticed, lz4hc is much slower than lz4, so if it's used
for online compression, it gives unprivileged users an easy way to keep
all CPUs busy (under writeback). I'm not sure whether that matters.
(OK, it at least gives users more options, yet I'm not sure end users
really understand what these algorithms mean; my guess is lz4hc spends
much more CPU time without much more storage saving at the default 16k
cluster size.)

From https://github.com/lz4/lz4 (Core i7-9700K CPU @ 4.9GHz),
Silesia Corpus:

Compressor              Ratio   Compression     Decompression
memcpy                  1.000   13700 MB/s      13700 MB/s
Zstandard 1.4.0 -1      2.883   515 MB/s        1380 MB/s
LZ4 HC -9 (v1.9.0)      2.721   41 MB/s         4900 MB/s

Also, a minor thing is lzo-rle: initially it was only used for
in-memory anonymous pages, where nothing is kept on-disk, so that was
fine. I'm not sure whether the original lzo author wants to support it
or not; it'd be better to get some opinions before committing it to the
on-disk format.

Thanks,
Gao Xiang

> - Eric
