Message-ID: <0C87F2F4-43E6-482E-9A44-0151DE312AD7@fb.com>
Date:   Tue, 29 Jan 2019 21:12:22 +0000
From:   Nick Terrell <terrelln@...com>
To:     David Sterba <dsterba@...e.cz>
CC:     Dennis Zhou <dennis@...nel.org>, David Sterba <dsterba@...e.com>,
        "Josef Bacik" <josef@...icpanda.com>, Chris Mason <clm@...com>,
        Omar Sandoval <osandov@...ndov.com>,
        Kernel Team <Kernel-team@...com>,
        "linux-btrfs@...r.kernel.org" <linux-btrfs@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 00/11] btrfs: add zstd compression level support



> On Jan 29, 2019, at 9:18 AM, David Sterba <dsterba@...e.cz> wrote:
> 
> On Mon, Jan 28, 2019 at 04:24:26PM -0500, Dennis Zhou wrote:
>> As mentioned above, a requirement that distinguishes zstd from zlib is
>> that higher compression levels require more memory. To manage this, each
>> compression level has its own queue of workspaces. A global LRU is used
>> to help with reclaim. To guarantee forward progress, a max level
>> workspace is preallocated and hidden from the LRU.
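(For context, the scheme described above might look roughly like the
sketch below. This is hypothetical and not the patchset's code;
zstd_alloc_workspace() is a made-up helper, the idle_ws[] lists are
assumed to be initialized at module init, and use of the reserved
workspace would need to be serialized in a real implementation.)

/*
 * Hypothetical sketch: one idle list per level, a global LRU for
 * reclaim, and a preallocated max-level workspace hidden from the LRU
 * so a request can make forward progress even if allocation fails.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

#define ZSTD_NR_LEVELS 15

struct zstd_workspace {
	struct list_head idle;	/* on idle_ws[level - 1] */
	struct list_head lru;	/* on ws_lru, eligible for reclaim */
	int level;
	void *mem;
};

static struct list_head idle_ws[ZSTD_NR_LEVELS];
static LIST_HEAD(ws_lru);
static DEFINE_SPINLOCK(ws_lock);
static struct zstd_workspace *reserved_ws; /* max level, never reclaimed */

static struct zstd_workspace *zstd_get_workspace(int level)
{
	struct zstd_workspace *ws = NULL;
	int i;

	spin_lock(&ws_lock);
	/* A workspace for a higher level is big enough for a lower one. */
	for (i = level; i <= ZSTD_NR_LEVELS; i++) {
		if (!list_empty(&idle_ws[i - 1])) {
			ws = list_first_entry(&idle_ws[i - 1],
					      struct zstd_workspace, idle);
			list_del(&ws->idle);
			list_del(&ws->lru);
			break;
		}
	}
	spin_unlock(&ws_lock);

	if (!ws)
		ws = zstd_alloc_workspace(level); /* made up; may fail */
	if (!ws)
		ws = reserved_ws; /* guaranteed forward progress */
	return ws;
}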
> 
> Here I'd like to bring up what was mentioned in previous iteration, the
> workspace sizes.
> 
> Level   Compression Memory
> 1       0.8 MB
> 2       1.0 MB
> 3       1.3 MB
> 4       0.9 MB
> 5       1.4 MB
> 6       1.5 MB
> 7       1.4 MB
> 8       1.8 MB
> 9       1.8 MB
> 10      1.8 MB
> 11      1.8 MB
> 12      1.8 MB
> 13      2.3 MB
> 14      2.6 MB
> 15      2.6 MB
> 
> and decompression needs the memory of level 1. The sizes can be grouped
> into, say, 3 classes; I'm not sure we'd really need 15 distinct
> workspaces. The reclaim mechanism helps, but I'd rather keep a smaller
> number of workspaces that covers the average use.
> 
> The default level is 3, which takes 1.3 MiB and also covers levels 1, 2
> and 4. Levels 5 to 12 take 1.8 MiB, and the rest take 2.6 MiB.
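As a side note, per-level sizes like the table above can be approximated
from userspace with upstream zstd's static-linking API. A minimal sketch
follows; it assumes a 128 KiB source-size hint (btrfs hands zstd input in
chunks of up to 128 KiB), and the numbers won't match the kernel's
workspace bounds exactly, nor stay stable as upstream retunes the levels.
Build with: cc probe.c -lzstd

#define ZSTD_STATIC_LINKING_ONLY
#include <stdio.h>
#include <zstd.h>

int main(void)
{
	/* Source-size hint: btrfs compresses in up to 128 KiB chunks. */
	unsigned long long src_size = 128 * 1024;
	int level;

	for (level = 1; level <= 15; level++) {
		ZSTD_compressionParameters cparams =
			ZSTD_getCParams(level, src_size, 0);
		size_t ws = ZSTD_estimateCStreamSize_usingCParams(cparams);

		printf("level %2d: ~%.1f MB\n", level,
		       ws / (1024.0 * 1024.0));
	}
	return 0;
}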
> 
>> btrfs filesystem 10 times and then read back after dropping the caches.
>> The btrfs filesystem was on an SSD.
>> 
>> Level   Ratio   Compression (MB/s)  Decompression (MB/s)
>> 1       2.658        438.47                910.51
>> 2       2.744        364.86                886.55
>> 3       2.801        336.33                828.41
>> 4       2.858        286.71                886.55
>> 5       2.916        212.77                556.84
>> 6       2.363        119.82                990.85
>> 7       3.000        154.06                849.30
>> 8       3.011        159.54                875.03
>> 9       3.025        100.51                940.15
>> 10      3.033        118.97                616.26
>> 11      3.036         94.19                802.11
>> 12      3.037         73.45                931.49
>> 13      3.041         55.17                835.26
>> 14      3.087         44.70                716.78
>> 15      3.126         37.30                878.84
> 
> From my casual user's perspective, I'd use level 1 for speed, 7 for a
> better ratio and 15 for the best compression. Anything else does not
> look good, though the results would vary based on the data set. I
> assume the Silesia corpus serves as a good approximation of the
> worst-case average.
>
> Levels 7-14 show a particularly obvious pattern: the same ratio, but the
> speed gets worse with each level. Taking the default level into account,
> (my) recommended levels would be 1, 3, 7 and 15.

Silesia is used because it is a standard corpus, and I'd call it about
average, but it contains a lot of variance and extreme edge-case data. The
intermediate strategies vary in effectiveness on different types of data.
For example, the lower levels are generally more effective on text, while
non-text data wants slightly higher levels, because they can find shorter
matches.
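If you want to check the trade-off on your own data, something like the
following userspace harness works. It's a rough sketch, not the benchmark
above: it times the zstd library alone, so filesystem and page cache
effects are excluded. Build with: cc bench.c -lzstd

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zstd.h>

int main(int argc, char **argv)
{
	if (argc != 2)
		return 1;
	FILE *f = fopen(argv[1], "rb");
	if (!f)
		return 1;
	fseek(f, 0, SEEK_END);
	long size = ftell(f);
	rewind(f);

	char *src = malloc(size);
	char *dst = malloc(ZSTD_compressBound(size));
	if (!src || !dst || fread(src, 1, size, f) != (size_t)size)
		return 1;

	for (int level = 1; level <= 15; level++) {
		/* For small inputs, loop the call to get stable timing. */
		clock_t t0 = clock();
		size_t csize = ZSTD_compress(dst, ZSTD_compressBound(size),
					     src, size, level);
		double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

		if (ZSTD_isError(csize))
			return 1;
		printf("level %2d: ratio %.3f, %.2f MB/s\n", level,
		       (double)size / csize, size / secs / (1024 * 1024));
	}
	return 0;
}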

Upstream zstd also shifts its levels around, and the memory usage of each
level, from time to time. I am going to update zstd in the kernel within
the next year, since we are slowing down development, but the shifts will
be small.

It could make sense to map the levels into size classes, since that could
reduce memory spikes, at the cost of higher steady-state memory usage.
I'm not familiar with the machinery used in these patches, so I can't
say much with certainty. I would probably use levels 1, 3, 7 (after it is
made monotonic), 12, and 15. You might skip 7, but leave 12.
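To illustrate the size-class idea with the grouping from your memory
table (just a sketch; the exact cutoffs are a design decision): each
requested level maps to the level with the largest workspace in its
class, so only three workspace sizes ever exist.

/* Hypothetical: round a level up to its size class's representative,
 * i.e. the level whose workspace is the class maximum. */
static int zstd_size_class(int level)
{
	if (level <= 4)
		return 3;	/* 1.3 MB, also covers levels 1, 2 and 4 */
	if (level <= 12)
		return 12;	/* 1.8 MB covers levels 5-12 */
	return 15;		/* 2.6 MB covers levels 13-15 */
}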

> I went through the patches and they look mostly OK. I don't like the
> indirections, but at the moment that's an implementation detail, as I'd
> like to agree on the overall approach first.
> 
> We might need a few revisions or cleanup rounds to converge on an
> efficient solution; the advantage here is that it's all in-memory and
> free of compatibility concerns once the level support for zstd is in
> and working.
> 
> For that reason, I'm not opposed to the current version of the patchset.
> Given the development schedule, we're really close to the code freeze,
> but the functionality has a narrow scope, so I'm tentatively counting
> on it for 5.1.
