Date:	Thu, 28 Feb 2013 14:00:36 -0800 (PST)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	minchan@...nel.org, sjenning@...ux.vnet.ibm.com,
	Nitin Gupta <nitingupta910@...il.com>
Cc:	Konrad Wilk <konrad.wilk@...cle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Bob Liu <lliubbo@...il.com>,
	Luigi Semenzato <semenzato@...gle.com>,
	Mel Gorman <mgorman@...e.de>
Subject: RE: zsmalloc limitations and related topics

> From: Dan Magenheimer
> Subject: zsmalloc limitations and related topics
> 
> WORKLOAD ANALYSIS
>   :
> 1) The average page compressed by almost a factor of six
>    (mean zsize == 694, stddev == 474)
> 2) Almost eleven percent of the pages were zero pages.  A
>    zero page compresses to 28 bytes.
> 3) On average, 77% of the bytes (3156) in the pages-to-be-
>    compressed contained a byte-value of zero.
> 4) Despite the above, mean density of zsmalloc was measured at
>    3.2 zpages/pageframe, presumably losing nearly half of
>    available space to fragmentation.
> 
> I have no clue if these measurements are representative
> of a wide range of workloads over the lifetime of a booted
> machine, but I am suspicious that they are not.  For example,
> the lzo1x compression algorithm claims to compress data by
> about a factor of two.
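A quick sanity check of the quoted numbers (a sketch in Python; the 0.11 zero-page fraction is rounded from "almost eleven percent", and the non-zero mean is derived, not measured):

```python
PAGE_SIZE = 4096
mean_zsize, zero_frac, zero_zsize = 694, 0.11, 28  # stats (1) and (2) above

# "almost a factor of six": 4096 / 694
print(PAGE_SIZE / mean_zsize)             # ~5.9

# mean zsize of the non-zero pages alone; the 28-byte zero
# pages drag the overall mean down only slightly
nonzero_mean = (mean_zsize - zero_frac * zero_zsize) / (1 - zero_frac)
print(round(nonzero_mean))                # 776
```

So even excluding the zero pages, the measured distribution still compresses well below PAGE_SIZE/2.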

I realized that with a small hack in zswap, I could simulate the
effect on zsmalloc of a workload with a very different zsize
distribution, one with a much higher mean, by simply doubling
(or tripling) the zsize passed to zs_malloc.  The results:

Unchanged: mean=694 stddev=474 -> mean density = 3.2
Doubled:   mean=1340 stddev=842 -> mean density = 1.9
Tripled:   mean=1636 stddev=1031 -> mean density = 1.6
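A back-of-the-envelope check on those three runs (a sketch, where "ideal" density assumes perfect packing with no per-object or zspage overhead, so the gap is an upper bound on fragmentation loss):

```python
PAGE_SIZE = 4096
# (mean zsize, measured density) from the three runs above
runs = [(694, 3.2), (1340, 1.9), (1636, 1.6)]
for mean, measured in runs:
    ideal = PAGE_SIZE / mean           # best case: perfect packing
    lost = 1 - measured / ideal        # fraction of pool space lost
    print(f"mean={mean}: ideal={ideal:.1f}, "
          f"measured={measured}, lost={lost:.0%}")
# mean=694: ideal=5.9, measured=3.2, lost=46%
# mean=1340: ideal=3.1, measured=1.9, lost=38%
# mean=1636: ideal=2.5, measured=1.6, lost=36%
```

By this estimate the unchanged workload loses nearly half of available space, and the doubled/tripled ones roughly one-third.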

Note that even tripled, the mean of the simulated
distribution is still much lower than PAGE_SIZE/2,
which is roughly the published expected compression for
lzo1x.  So one would still expect a mean density greater
than two, but apparently about one-third of the available
space is lost to fragmentation.

Without a "representative" workload, I still have no clue
as to whether this simulated distribution is relevant,
but it is interesting to note that, for a workload with
lower mean compressibility, zsmalloc's reputation as
"high density" may be undeserved.

