lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20140708220519.GB29639@cmpxchg.org>
Date:	Tue, 8 Jul 2014 18:05:19 -0400
From:	Johannes Weiner <hannes@...xchg.org>
To:	Vladimir Davydov <vdavydov@...allels.com>
Cc:	akpm@...ux-foundation.org, mhocko@...e.cz, cl@...ux.com,
	glommer@...il.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -mm 0/8] memcg: reparent kmem on css offline

On Mon, Jul 07, 2014 at 07:40:08PM +0400, Vladimir Davydov wrote:
> On Mon, Jul 07, 2014 at 10:25:06AM -0400, Johannes Weiner wrote:
> > You could then reap dead slab caches as part of the regular per-memcg
> > slab scanning in reclaim, without having to resort to auxiliary lists,
> > vmpressure events etc.
> 
> Do you mean adding a per memcg shrinker that will call kmem_cache_shrink
> for all memcg caches on memcg/global pressure?
> 
> Actually I recently made dead caches self-destructive at the cost of
> slowing down kfrees to dead caches (see
> https://www.lwn.net/Articles/602330/, it's already in the mmotm tree) so
> no dead cache reaping is necessary. Do you think if we need it now?
>
> > I think it would save us a lot of code and complexity.  You want
> > per-memcg slab scanning *anyway*, all we'd have to change in the
> > existing code would be to pin the css until the LRUs and kmem caches
> > are truly empty, and switch mem_cgroup_iter() to css_tryget().
> > 
> > Would this make sense to you?
> 
> Hmm, interesting. Thank you for such a thorough explanation.
> 
> One question. Do we still need to free mem_cgroup->kmemcg_id on css
> offline so that it can be reused by new kmem-active cgroups (currently
> we don't)?
> 
> If we won't free it the root_cache->memcg_params->memcg_arrays may
> become really huge due to lots of dead css holding the id.

We only need the O(1) access of the array for allocation - not frees
and reclaim, right?

So with your self-destruct code, can we prune caches of dead css and
then just remove them from the array?  Or move them from the array to
a per-memcg linked list that can be scanned on memcg memory pressure?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ