Date:	Wed, 27 Mar 2013 14:42:08 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Konrad Wilk <konrad.wilk@...cle.com>,
	Minchan Kim <minchan@...nel.org>, Bob Liu <bob.liu@...cle.com>,
	Robert Jennings <rcj@...ux.vnet.ibm.com>,
	Nitin Gupta <ngupta@...are.org>,
	Wanpeng Li <liwanp@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: zsmalloc/lzo compressibility vs entropy

This might be obvious to those of you who are better
mathematicians than I, but I ran some experiments
to confirm the relationship between entropy and compressibility
and thought I should report the results to the list.

Using the LZO code in the kernel via zsmalloc and some
hacks in zswap, I measured the compression of pages
generated by get_random_bytes and then of pages
where half the page is generated by get_random_bytes()
and the other half-page is zero-filled.
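
(In case anyone wants to poke at this outside the kernel, below
is a rough userspace sketch of the measurement.  It uses
liblzo2's lzo1x_1_compress() and /dev/urandom in place of the
in-kernel LZO code and get_random_bytes(), and none of the
zswap/zsmalloc plumbing, so the numbers are only indicative of
what the in-kernel path sees.)

/* Compress a 4096-byte "page" with LZO1X-1 and report the
 * compressed size ("zsize").  Build with:  gcc ... -llzo2  */
#include <lzo/lzo1x.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static void report_zsize(const char *label, unsigned char *page)
{
	/* LZO1X-1 needs LZO1X_1_MEM_COMPRESS bytes of scratch space,
	 * and the output may expand slightly beyond the input size. */
	static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
	unsigned char out[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
	lzo_uint outlen = sizeof(out);

	lzo1x_1_compress(page, PAGE_SIZE, out, &outlen, wrkmem);
	printf("%-16s zsize = %lu\n", label, (unsigned long)outlen);
}

int main(void)
{
	unsigned char page[PAGE_SIZE];
	FILE *f = fopen("/dev/urandom", "rb");

	if (lzo_init() != LZO_E_OK || !f)
		return 1;
	if (fread(page, 1, PAGE_SIZE, f) != PAGE_SIZE)
		return 1;

	/* Fully random page */
	report_zsize("fully random", page);

	/* Half random: first half of the page zero-filled
	 * (the quarter-random case is built the same way). */
	memset(page, 0, PAGE_SIZE / 2);
	report_zsize("half random", page);

	fclose(f);
	return 0;
}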

For a fully random page, one would expect the number
of zero-bits and one-bits to be equal (highest
entropy), and that proved true:  the mean number of
one-bits in the fully random page was 16384, i.e. half
of the 32768 bits in a page (x86, so PAGE_SIZE=4096
bytes * 8 bits/byte), with a stddev of 93
(sample size > 500000).  For this sample of pages,
zsize had a mean of 4116 and a stddev of 16.
So for fully random pages, LZO compression results
in "negative" compression: the compressed page is
slightly larger than the original page.
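
(For reference, the per-page one-bit count was gathered with
something morally equivalent to the sketch below; the actual
instrumentation lives in my zswap hacks.  In the kernel,
hweight8() or bitmap_weight() would do the same job.)

/* Count the one-bits in a 4096-byte page. */
unsigned int ones = 0;
size_t i;

for (i = 0; i < PAGE_SIZE; i++)
	ones += __builtin_popcount(page[i]);
/* expect ~16384 for a fully random page */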

For a "half random" page -- a fully random page with
the first half of the page overwritten with zeroes --
the zsize mean was 2077 with a stddev of 6.  So a
half-random page compresses by about a factor of 2.
(Just to be sure, I reran the experiment with the first
half of the page overwritten with ones instead of
zeroes, and the result was approximately the same.)

For extra credit, I ran a "quarter random" page...
the zsize mean was 1052 with a stddev of 45.

For more extra credit, I tried a fully random page
with every OTHER byte forced to zero, so half the
bytes are random and half are zero.  The result:
the zsize mean was 3841 with a stddev of 33.  Then I
tried a fully random page with every other PAIR of
bytes forced to zero.  The result: the zsize mean was
4029 with a stddev of 67.  (Worse!)
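
(For reproducibility, the two alternating patterns were built
roughly as below, each applied to a fresh fully random page;
again a sketch, not the exact zswap hack:)

/* Every OTHER byte forced to zero */
for (i = 0; i < PAGE_SIZE; i += 2)
	page[i] = 0;

/* Every other PAIR of bytes forced to zero */
for (i = 0; i < PAGE_SIZE; i += 4) {
	page[i] = 0;
	page[i + 1] = 0;
}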

So LZO page compression works better when there are many
more zero-bits than one-bits in a page (or vice versa), but it
works best when long runs of bytes are identical.  That fits
how LZO works: it is a byte-oriented, LZ77-style compressor
that finds repeated byte sequences, so a 2048-byte run of
zeroes collapses to almost nothing, while alternating random
and zero bytes leave few repeats long enough to match.

All this still leaves open the question of what the
page-entropy (and zsize distribution) will be over a
large set of pages and over a large set of workloads
AND across different classes of data (e.g. frontswap
pages vs cleancache pages), but at least we have
some theory to guide us.
