Message-ID: <20140418132331.GA26283@cmpxchg.org>
Date:	Fri, 18 Apr 2014 09:23:31 -0400
From:	Johannes Weiner <hannes@...xchg.org>
To:	Vladimir Davydov <vdavydov@...allels.com>
Cc:	mhocko@...e.cz, akpm@...ux-foundation.org, glommer@...il.com,
	cl@...ux-foundation.org, penberg@...nel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, devel@...nvz.org
Subject: Re: [PATCH RFC -mm v2 0/3] kmemcg: simplify work-flow (was
 "memcg-vs-slab cleanup")

Hi Vladimir,

On Fri, Apr 18, 2014 at 12:04:46PM +0400, Vladimir Davydov wrote:
> Hi Michal, Johannes,
> 
> This patch set is part of the preparations for kmemcg re-parenting. It
> aims at simplifying kmemcg workflows and synchronization.
> 
> First, it removes asynchronous per-memcg cache destruction (see patches
> 1 and 2). Caches are now destroyed only on memcg offline, which means
> that caches that are not empty at memcg offline are leaked. However,
> they are effectively leaked already, because
> memcg_cache_params::nr_pages normally never drops to 0, so the
> destruction work is never scheduled unless kmem_cache_shrink is called
> explicitly. In the future I plan to reap such dead caches on vmpressure
> or periodically.
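
Just to restate the scheme being removed, here is a rough hand-written
sketch (hypothetical struct and helper, not the actual
memcg_cache_params code): the destroy work is queued only when the dead
cache's page count reaches zero, which in practice never happens on its
own while live objects still pin slabs.

#include <linux/atomic.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for the per-memcg cache state. */
struct dead_cache {
	struct kmem_cache *cache;	/* the per-memcg cache */
	atomic_t nr_pages;		/* slab pages still held by it */
	bool dead;			/* owning memcg already offlined */
	struct work_struct destroy_work;
};

/* Called whenever the cache releases a slab page. */
static void dead_cache_release_page(struct dead_cache *dc)
{
	/*
	 * The destroy work only runs once nr_pages hits zero, which it
	 * doesn't do unless kmem_cache_shrink(dc->cache) is called
	 * explicitly to free the empty partial slabs first.
	 */
	if (atomic_dec_and_test(&dc->nr_pages) && dc->dead)
		schedule_work(&dc->destroy_work);
}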

I like the synchronous handling on css destruction, but the periodic
reaping part still bothers me.  If there is absolutely zero use left
for these caches, they shouldn't hang around until we happen to hit
memory pressure or some arbitrary time interval expires.

Would it be feasible to implement cache merging in both slub and slab,
so that upon css destruction the remaining slabs of the child's cache
could be moved into the parent's cache?  If the parent doesn't have a
corresponding cache, just reparent the whole cache.
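
Very roughly what I have in mind; every helper in this sketch is made
up, it's only meant to illustrate the control flow on css destruction:

/* Hypothetical helpers, for illustration only. */
struct mem_cgroup;
struct kmem_cache;

struct mem_cgroup *memcg_parent(struct mem_cgroup *memcg);
struct kmem_cache *memcg_cache_lookup(struct mem_cgroup *memcg,
				      struct kmem_cache *root_cache);
void slab_move_partial(struct kmem_cache *dst, struct kmem_cache *src);
void memcg_reparent_cache(struct kmem_cache *cache,
			  struct mem_cgroup *new_owner);

/*
 * On css destruction: merge the child's remaining slabs into the
 * parent's cache, or reparent the whole cache if the parent has none.
 */
static void reparent_child_cache(struct mem_cgroup *memcg,
				 struct kmem_cache *child,
				 struct kmem_cache *root_cache)
{
	struct mem_cgroup *parent = memcg_parent(memcg);
	struct kmem_cache *dst = memcg_cache_lookup(parent, root_cache);

	if (dst)
		slab_move_partial(dst, child);		/* move slabs over */
	else
		memcg_reparent_cache(child, parent);	/* hand over the cache */
}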

> Second, it replaces the per-memcg slab_caches_mutex'es with the global
> memcg_slab_mutex, which is taken for the whole per-memcg cache
> creation/destruction path, before the slab_mutex (see patch 3). This
> greatly simplifies synchronization among the various per-memcg cache
> creation/destruction paths.
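
IOW, if I read this right, the locking would look something like the
sketch below (the function is made up; the two mutexes are the ones you
name, with slab_mutex nesting inside memcg_slab_mutex):

#include <linux/mutex.h>

extern struct mutex slab_mutex;		/* the existing global slab lock */
static DEFINE_MUTEX(memcg_slab_mutex);	/* new outer lock for per-memcg paths */

/* Sketch of a per-memcg cache creation path under the new ordering. */
static void memcg_cache_create_path(void)
{
	mutex_lock(&memcg_slab_mutex);	/* held across the whole path */

	mutex_lock(&slab_mutex);	/* nested inside memcg_slab_mutex */
	/* ... allocate and register the per-memcg cache ... */
	mutex_unlock(&slab_mutex);

	mutex_unlock(&memcg_slab_mutex);
}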

This sounds reasonable.  I'll go look at the code.
