Date:   Tue, 27 Nov 2018 16:19:29 +0000
From:   Dave Rodgman <dave.rodgman@....com>
To:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     nd <nd@....com>,
        "herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Matt Sealey <Matt.Sealey@....com>,
        "nitingupta910@...il.com" <nitingupta910@...il.com>,
        "rpurdie@...nedhand.com" <rpurdie@...nedhand.com>,
        "markus@...rhumer.com" <markus@...rhumer.com>,
        "minchan@...nel.org" <minchan@...nel.org>,
        "sergey.senozhatsky.work@...il.com" 
        <sergey.senozhatsky.work@...il.com>,
        "sonnyrao@...gle.com" <sonnyrao@...gle.com>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: [PATCH v2 0/7] lib/lzo: performance improvements

This patch series introduces performance improvements for lzo.

The previous version of this patchset is here:
https://lkml.org/lkml/2018/11/21/625

This version tidies up the ifdefs as per Christoph's comment (although
certainly more could be done, this is at least a bit more consistent
with normal kernel coding style).

On 23/11/2018 2:12 am, Sergey Senozhatsky wrote:

>> The graph below shows the weighted round-trip throughput of lzo, lz4 and
>> lzo-rle, for randomly generated 4k chunks of data with varying levels of
>> entropy. (To calculate weighted round-trip throughput, compression performance
>> is emphasised to reflect the fact that zram does around 2.25x more compression
>> than decompression.)
> 
> Right. The number is data dependent. Not all swapped out pages can be
> compressed; compressed pages that end up being >= zs_huge_class_size() are
> considered incompressible and stored as is.
> 
> I'd say that on my setups around 50-60% of pages are incompressible.

So, just to give a bit more detail: the test setup was a Samsung
Chromebook Pro, cycling through 80 tabs in Chrome. With lzo-rle, only
5% of pages increased in size, and 90% of pages compressed to 75% of
their original size or better. The mean compression ratio was 41%.
Importantly for lzo-rle, there are a lot of low-entropy pages where it
can do well: in total, about 20% of the data is zeros forming part of
a run of four or more zero bytes.
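
(As an aside, a zero-run figure like the 20% above can be estimated
with a simple scan over the page data. The snippet below is just an
illustrative stand-alone sketch, not code from this series:)

#include <stddef.h>

static size_t zero_run_bytes(const unsigned char *buf, size_t len)
{
	/*
	 * Count bytes that are zeros and sit inside a run of at least
	 * four consecutive zero bytes; dividing the result by len
	 * gives the zero-run coverage fraction mentioned above.
	 */
	size_t total = 0, i = 0;

	while (i < len) {
		size_t start;

		if (buf[i] != 0) {
			i++;
			continue;
		}
		start = i;
		while (i < len && buf[i] == 0)
			i++;
		if (i - start >= 4)
			total += i - start;
	}
	return total;
}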

As a quick summary of the impact of these patches on bigger chunks of
data, I've compared the performance of four different variants of lzo
on two large (~40 MB) files. The numbers show round-trip throughput
in MB/s:

Variant         | Low-entropy | High-entropy
Current lzo     |         242 |          157
Arm opts        |         290 |          159
RLE             |         876 |          151
Arm opts + RLE  |        1150 |          181
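
(For clarity, the round-trip numbers above are simply data volume over
the combined compress + decompress time; the weighted variant used for
the graph mentioned earlier additionally emphasises compression by the
~2.25x factor. The sketch below shows one plausible way to compute
both; the exact weighting used for the graph is an assumption here,
not lifted from the benchmark harness:)

#include <stddef.h>

/*
 * Plain round-trip throughput in MB/s, as reported in the table.
 * Times are in seconds, bytes is the size of the data processed.
 */
static double round_trip_mbps(size_t bytes, double t_compress,
			      double t_decompress)
{
	return (bytes / 1e6) / (t_compress + t_decompress);
}

/*
 * Weighted round-trip throughput: assume zram performs ~2.25
 * compressions per decompression, so 2.25x the data is compressed for
 * every 1x that is decompressed. This weighting is an illustrative
 * assumption, not the exact formula behind the graph.
 */
static double weighted_round_trip_mbps(size_t bytes, double t_compress,
				       double t_decompress)
{
	return (3.25 * bytes / 1e6) / (2.25 * t_compress + t_decompress);
}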

So both the Arm optimisations (the 8/16-byte copy and CTZ patches) and
the RLE implementation make a significant contribution to the overall
performance uplift.
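
(To give a flavour of what the CTZ patches refer to: match-length
scanning can compare eight bytes at a time and use a count-trailing-
zeros operation on the XOR of the two words to locate the first
differing byte. The sketch below is illustrative only, assumes
little-endian byte order and the GCC/Clang __builtin_ctzll() builtin,
and is not the code from this series:)

#include <stdint.h>
#include <string.h>

/*
 * Illustrative CTZ-based match-length scan (not the patch code).
 * memcpy() keeps the 64-bit loads portable with respect to alignment.
 */
static size_t match_length(const unsigned char *a,
			   const unsigned char *b, size_t max)
{
	size_t n = 0;

	while (n + 8 <= max) {
		uint64_t va, vb;

		memcpy(&va, a + n, 8);
		memcpy(&vb, b + n, 8);
		if (va != vb) {
			/*
			 * On little-endian, the lowest set bit of the
			 * XOR falls in the first differing byte.
			 */
			return n + (__builtin_ctzll(va ^ vb) >> 3);
		}
		n += 8;
	}
	while (n < max && a[n] == b[n])
		n++;
	return n;
}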
