Date:	Tue, 22 Jan 2013 15:58:53 -0800 (PST)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Konrad Wilk <konrad.wilk@...cle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: High slab usage testing with zcache/zswap (Was: [PATCH 7/8] zswap:
 add to mm/)

> From: Dave Chinner [mailto:david@...morbit.com]
> Sent: Thursday, January 03, 2013 12:34 AM
> Subject: Re: [PATCH 7/8] zswap: add to mm/
> 
> > > On 01/02/2013 09:26 AM, Dan Magenheimer wrote:
> > > > However if one compares the total percentage
> > > > of RAM used for zpages by zswap vs the total percentage of RAM
> > > > used by slab, I suspect that the zswap number will dominate,
> > > > perhaps because zswap is storing primarily data and slab is
> > > > storing primarily metadata?
> > >
> > > That's *obviously* 100% dependent on how you configure zswap.  But, that
> > > said, most of _my_ systems tend to sit with about 5% of memory in
> > > reclaimable slab
> >
> > The 5% "sitting" number for slab is somewhat interesting, but
> > IMHO irrelevant here. The really interesting value is what percent
> > is used by slab when the system is under high memory pressure; I'd
> > imagine that number would be much smaller.  True?
> 
> Not at all. The amount of slab memory used is wholly dependent on
> workload. I have plenty of workloads with severe memory pressure
> that I test with that sit at a steady state of >80% of ram in slab
> caches. These workloads are filesystem metadata intensive rather than
> data intensive, so that's exactly the right cache balance for the
> system to have....

Hey Dave --

I'd like to do some zcache policy testing under severe memory
pressure of the kind you describe above, where >80% of RAM is
in slab caches.  Any thoughts on how to produce that, or easily
simulate it, on a very simple hardware setup (e.g. a PC with one
SATA disk)?  Or is a "big data" configuration required?
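
To make the question concrete, here's the sort of crude generator I
was imagining (just a sketch; the DIRS/FILES sizing is a guess and
would need scaling to the test box's RAM):

/* mdstress.c - crude metadata-intensive load: create a large tree of
 * empty files, then re-stat() it in a loop so the dentry and inode
 * slab caches fill up.  At very roughly 1kB of slab per cached file,
 * 1M files should be on the order of 1GB of slab; scale DIRS/FILES
 * to the test box's RAM.
 * Build: cc -o mdstress mdstress.c; run in a scratch dir; ^C to stop.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define DIRS	1000
#define FILES	1000

int main(void)
{
	char path[64];
	struct stat st;
	int d, n, fd;

	for (d = 0; d < DIRS; d++) {
		snprintf(path, sizeof(path), "d%04d", d);
		mkdir(path, 0755);
		for (n = 0; n < FILES; n++) {
			snprintf(path, sizeof(path), "d%04d/f%04d", d, n);
			fd = open(path, O_CREAT | O_WRONLY, 0644);
			if (fd >= 0)
				close(fd);
		}
	}
	for (;;)	/* keep re-stat()ing so the slab objects stay hot */
		for (d = 0; d < DIRS; d++)
			for (n = 0; n < FILES; n++) {
				snprintf(path, sizeof(path), "d%04d/f%04d", d, n);
				stat(path, &st);
			}
}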

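And to watch the balance while it runs, I'd use a quick hack like
this against the standard MemTotal/Slab/SReclaimable fields in
/proc/meminfo:

/* slabpct.c - print slab usage as a percentage of total RAM, from
 * the MemTotal/Slab/SReclaimable lines in /proc/meminfo.
 * Build: cc -o slabpct slabpct.c
 */
#include <stdio.h>
#include <string.h>

static long field(FILE *f, const char *name)
{
	char line[128];
	long kb = 0;

	rewind(f);
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, name, strlen(name))) {
			sscanf(line + strlen(name), " %ld", &kb);
			break;
		}
	return kb;
}

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	long total, slab, reclaim;

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	total = field(f, "MemTotal:");
	slab = field(f, "Slab:");
	reclaim = field(f, "SReclaimable:");
	fclose(f);

	printf("Slab %ld kB (%.1f%% of RAM), SReclaimable %ld kB (%.1f%%)\n",
	       slab, 100.0 * slab / total, reclaim, 100.0 * reclaim / total);
	return 0;
}
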
Thanks for any advice!
Dan
