Date:	Mon, 4 Mar 2013 10:29:32 -0800 (PST)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Ric Mason <ric.masonn@...il.com>
Cc:	minchan@...nel.org, sjenning@...ux.vnet.ibm.com,
	Nitin Gupta <nitingupta910@...il.com>,
	Konrad Wilk <konrad.wilk@...cle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Bob Liu <lliubbo@...il.com>,
	Luigi Semenzato <semenzato@...gle.com>,
	Mel Gorman <mgorman@...e.de>
Subject: RE: zsmalloc limitations and related topics

> From: Ric Mason [mailto:ric.masonn@...il.com]
> Subject: Re: zsmalloc limitations and related topics
> 
> On 02/28/2013 07:24 AM, Dan Magenheimer wrote:
> > Hi all --
> >
> > I've been doing some experimentation on zsmalloc in preparation
> > for my topic proposed for LSFMM13 and have run across some
> > perplexing limitations.  Those familiar with the intimate details
> > of zsmalloc might be well aware of these limitations, but they
> > aren't documented or immediately obvious, so I thought it would
> > be worthwhile to air them publicly.  I've also included some
> > measurements from the experimentation and some related thoughts.
> >
> > (Some of the terms here are unusual and may be used inconsistently
> > by different developers so a glossary of definitions of the terms
> > used here is appended.)
> >
> > ZSMALLOC LIMITATIONS
> >
> > Zsmalloc is used for two zprojects: zram and the out-of-tree
> > zswap.  Zsmalloc can achieve high density when "full".  But:
> >
> > 1) Zsmalloc has a worst-case density of 0.25 (one zpage per
> >     four pageframes).
> > 2) When not full and especially when nearly-empty _after_
> >     being full, density may fall below 1.0 as a result of
> >     fragmentation.
> 
> What's the meaning of nearly-empty _after_ being full?

Step 1:  Add a few (N) zpages to zsmalloc.  It is "nearly empty".
Step 2:  Now add many more zpages to zsmalloc until allocation
         limits are reached.  It is "full".
Step 3:  Now remove many zpages from zsmalloc until there are
         N zpages remaining.  It is now "nearly empty after
         being full".

The fragmentation characteristics after Step 1 and after Step 3
are different, even though in both cases zsmalloc contains
N zpages.
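
To make the arithmetic concrete, here is a toy userspace sketch
(my illustration only, not zsmalloc code).  It assumes a single
size class whose zspage spans 4 pageframes and holds 11 zpages;
the numbers are made up just so the density math is visible.  When
full, density is about 2.75; if draining leaves one live zpage per
zspage, every zspage stays pinned and density falls to the 0.25
worst case from point 1 above.

/*
 * Toy userspace model (not zsmalloc code) of one size class whose
 * zspage spans 4 pageframes and holds 11 zpages -- assumed numbers,
 * chosen only to make the density arithmetic visible.
 */
#include <stdio.h>

#define FRAMES_PER_ZSPAGE	4	/* assumed zspage size */
#define ZPAGES_PER_ZSPAGE	11	/* assumed objects per zspage */
#define NR_ZSPAGES		1000

static void report(const char *when, int live_per_zspage)
{
	/* a zspage cannot release its pageframes while any of its
	 * zpages is still live, so all of them stay pinned */
	int live = NR_ZSPAGES * live_per_zspage;
	int frames = NR_ZSPAGES * FRAMES_PER_ZSPAGE;

	printf("%-20s live zpages = %5d, pinned frames = %5d, density = %.2f\n",
	       when, live, frames, (double)live / frames);
}

int main(void)
{
	report("full (Step 2):", ZPAGES_PER_ZSPAGE);	/* density 2.75 */
	report("drained (Step 3):", 1);			/* density 0.25 */
	return 0;
}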
 
> > 3) Zsmalloc has a density of exactly 1.0 for any number of
> >     zpages with zsize >= 0.8 * PAGE_SIZE.
> > 4) Zsmalloc contains several compile-time parameters;
> >     the best value of these parameters may be very workload
> >     dependent.
> >
> > If density == 1.0, that means we are paying the overhead of
> > compression+decompression for no space advantage.  If
> > density < 1.0, that means using zsmalloc is detrimental,
> > resulting in worse memory pressure than if it were not used.
> >
> > WORKLOAD ANALYSIS
> >
> > These limitations emphasize that the workload used to evaluate
> > zsmalloc is very important.  Benchmarks that measure data
> 
> Could you share your benchmark, so that others can take
> advantage of it?

As Seth does, I just use a kernel "make".  I run it on
a full graphical installation of EL6.  To ensure there is
memory pressure, I limit physical memory to 1GB and use
"make -j20".

> > throughput or CPU utilization are of questionable value because
> > it is the _content_ of the data that is particularly relevant
> > for compression.  Even more precisely, it is the "entropy"
> > of the data that is relevant, because the amount of
> > compressibility in the data is related to the entropy;
> > i.e., an entirely random pageful of bits will compress poorly
> > and a highly regular pageful of bits will compress well.
> > Since the zprojects manage a large number of zpages, both
> > the mean and distribution of zsize of the workload should
> > be "representative".
> >
> > The workload most widely used to publish results for
> > the various zprojects is a kernel-compile using "make -jN"
> > where N is artificially increased to impose memory pressure.
> > By adding some debug code to zswap, I was able to analyze
> > this workload and found the following:
> >
> > 1) The average page compressed by almost a factor of six
> >     (mean zsize == 694, stddev == 474)
> 
> What is stddev?

Standard deviation.  See:
http://en.wikipedia.org/wiki/Standard_deviation 
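
For what it's worth, here is a minimal sketch (not the actual zswap
debug patch) of how the mean and stddev of zsize can be computed
once each compressed size is logged.  The sample values are made up;
a real run records one zsize per stored page.  Build with -lm.

/* Minimal userspace sketch of computing mean and standard deviation
 * of zsize; not the zswap debug code itself. */
#include <math.h>
#include <stdio.h>

static void zsize_stats(const unsigned int *zsize, int n)
{
	double sum = 0.0, sq = 0.0, mean, stddev;
	int i;

	for (i = 0; i < n; i++) {
		sum += zsize[i];
		sq += (double)zsize[i] * zsize[i];
	}
	mean = sum / n;
	/* population stddev: sqrt(E[x^2] - (E[x])^2) */
	stddev = sqrt(sq / n - mean * mean);

	printf("mean zsize = %.0f, stddev = %.0f\n", mean, stddev);
}

int main(void)
{
	/* made-up samples; a real run logs one value per stored page */
	unsigned int samples[] = { 220, 694, 1200, 310, 980, 640, 410, 1530 };

	zsize_stats(samples, (int)(sizeof(samples) / sizeof(samples[0])));
	return 0;
}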
