Message-ID: <20140421121840.GA11622@cmpxchg.org>
Date:	Mon, 21 Apr 2014 08:18:40 -0400
From:	Johannes Weiner <hannes@...xchg.org>
To:	Vladimir Davydov <vdavydov@...allels.com>
Cc:	Michal Hocko <mhocko@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Pekka Enberg <penberg@...nel.org>,
	Glauber Costa <glommer@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	devel@...nvz.org
Subject: Re: [RFC] how should we deal with dead memcgs' kmem caches?

On Sun, Apr 20, 2014 at 02:39:31PM +0400, Vladimir Davydov wrote:
> * Way #2 - reap caches periodically or on vmpressure *
> 
> We can remove the async work scheduling from kmem_cache_free completely,
> and instead walk over all dead kmem caches either periodically or on
> vmpressure, shrinking them and destroying those that have become empty.
> 
> That is what I had in mind when submitting the patch set titled "kmemcg:
> simplify work-flow":
> 	https://lkml.org/lkml/2014/4/18/42
> 
> Pros: easy to implement
> Cons: instead of being destroyed asap, dead caches will hang around
> until some later point in time or, even worse, until a memory pressure
> condition arises.
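
For concreteness, a minimal sketch of what such a reaper might look
like.  Everything below is illustrative: the dead_caches list, the
dead_cache struct and the cache_is_empty() helper are hypothetical
placeholders rather than the existing kmemcg structures; only
kmem_cache_shrink(), kmem_cache_destroy() and the list/mutex/workqueue
primitives are real interfaces.

#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* Hypothetical bookkeeping entry for a cache left behind by a dead memcg. */
struct dead_cache {
        struct list_head list;
        struct kmem_cache *cachep;
};

static LIST_HEAD(dead_caches);
static DEFINE_MUTEX(dead_caches_mutex);

/* Hypothetical helper: true once the cache holds no objects at all. */
bool cache_is_empty(struct kmem_cache *cachep);

static void dead_cache_reap(struct work_struct *work);
static DECLARE_DELAYED_WORK(dead_cache_reap_work, dead_cache_reap);

static void dead_cache_reap(struct work_struct *work)
{
        struct dead_cache *dc, *tmp;

        mutex_lock(&dead_caches_mutex);
        list_for_each_entry_safe(dc, tmp, &dead_caches, list) {
                kmem_cache_shrink(dc->cachep);          /* drop free slabs */
                if (cache_is_empty(dc->cachep)) {
                        list_del(&dc->list);
                        kmem_cache_destroy(dc->cachep);
                        kfree(dc);
                }
        }

        /*
         * Re-arm while anything is left; whoever adds a new dead cache
         * would schedule this work again (not shown).  The same function
         * could just as well be driven from a vmpressure event.
         */
        if (!list_empty(&dead_caches))
                schedule_delayed_work(&dead_cache_reap_work, 5 * HZ);
        mutex_unlock(&dead_caches_mutex);
}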

This would continue to pin css after cgroup destruction indefinitely,
or at least for an arbitrary amount of time.  To reduce the waste from
such pinning, we currently have to tear down other parts of the memcg
optimistically from css_offline(), which is called before the last
reference disappears and out of hierarchy order, making the teardown
unnecessarily complicated and error-prone.
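
For reference, the two hooks involved look roughly like this; the
callback signatures are the real cgroup API, but the function names
and the (elided) bodies are only illustrative:

#include <linux/cgroup.h>

/* Called at rmdir time, while css references may still be outstanding. */
static void memcg_css_offline(struct cgroup_subsys_state *css)
{
        /* "optimistic" teardown: release what we safely can, early */
}

/* Called only once the last css reference has been dropped. */
static void memcg_css_free(struct cgroup_subsys_state *css)
{
        /* final teardown */
}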

So I think "easy to implement" is misleading.  What we really care
about is "easy to maintain", and this basically excludes any async
schemes.

As far as synchronous cache teardown goes, I think everything that
introduces object accounting into the slab hotpaths will also be a
tough sell.
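
To make the cost concrete: the accounting would boil down to something
like the sketch below.  The memcg_cache_meta struct and its fields are
made up purely for illustration; the point is the extra atomic
operation and branch that every kmem_cache_free() on a memcg cache
would then have to pay.

#include <linux/atomic.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical per-cache metadata; none of these fields exist today. */
struct memcg_cache_meta {
        atomic_t nr_objects;            /* live objects in this cache */
        bool dead;                      /* owning memcg has been destroyed */
        struct kmem_cache *cachep;
};

/* What every free on a memcg cache would have to do in addition. */
static inline void memcg_cache_free_hook(struct memcg_cache_meta *meta)
{
        if (atomic_dec_and_test(&meta->nr_objects) && meta->dead) {
                /*
                 * Last object gone: the dead cache could be destroyed
                 * now.  Even this step would have to be deferred, since
                 * kmem_cache_destroy() sleeps and frees can happen from
                 * atomic context.
                 */
        }
}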

Personally, I would prefer cache merging, where the remaining child
slab pages are moved to the parent's cache on cgroup destruction.
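
A very rough sketch of that idea, with for_each_slab_page() and
move_slab_page() standing in as hypothetical helpers rather than
existing interfaces:

#include <linux/mm_types.h>
#include <linux/slab.h>

static void memcg_merge_dead_cache(struct kmem_cache *child,
                                   struct kmem_cache *parent)
{
        struct page *page;

        /* Hypothetical walk over every slab page still backing the child. */
        for_each_slab_page(page, child) {
                /*
                 * Re-point the page at the parent and splice it onto the
                 * parent's partial list, so any outstanding objects are
                 * later freed through the parent like any other object.
                 */
                move_slab_page(page, child, parent);
        }

        /* Nothing references the child cache any more. */
        kmem_cache_destroy(child);
}

The attraction is that nothing outlives the cgroup: there is no css
pinning and no background machinery left to maintain.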