Message-ID: <20170127180305.GB4332@esperanza>
Date:   Fri, 27 Jan 2017 21:03:05 +0300
From:   Vladimir Davydov <vdavydov@...antool.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
        iamjoonsoo.kim@....com, akpm@...ux-foundation.org, jsvana@...com,
        hannes@...xchg.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, cgroups@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH 03/10] slab: remove synchronous rcu_barrier() call in
 memcg cache release path

On Tue, Jan 17, 2017 at 03:54:04PM -0800, Tejun Heo wrote:
> With kmem cgroup support enabled, kmem_caches can be created and
> destroyed frequently, and a great number of near-empty kmem_caches can
> accumulate if there are a lot of transient cgroups and the system is
> not under memory pressure.  When memory reclaim starts under such
> conditions, it can lead to consecutive deactivation and destruction of
> many kmem_caches, easily hundreds of thousands on moderately large
> systems, exposing scalability issues in the current slab management
> code.  This is one of the patches to address the issue.
> 
> SLAB_DESTROY_BY_RCU caches need to flush all RCU operations before
> destruction because slab pages are freed through RCU and they need to
> be able to dereference the associated kmem_cache.  Currently, it's
> done synchronously with rcu_barrier().  As rcu_barrier() is expensive
> time-wise, slab implements a batching mechanism so that rcu_barrier()
> can be done for multiple caches at the same time.
> 
> Unfortunately, the rcu_barrier() is in a synchronous path that is
> called while holding cgroup_mutex, and the batching is too limited to
> be of much help.
> 
> This patch updates the cache release path so that the batching is
> asynchronous and global.  All SLAB_DESTROY_BY_RCU caches are queued
> globally and a work item consumes the list.  The work item calls
> rcu_barrier() only once for all caches that are currently queued.
> 
> * release_caches() is removed and shutdown_cache() now either directly
>   releases the cache or schedules an RCU callback to do that.  This
>   makes the cache inaccessible once shutdown_cache() is called and
>   makes it impossible for shutdown_memcg_caches() to do memcg-specific
>   cleanups afterwards.  Move the memcg-specific part into a helper,
>   unlink_memcg_cache(), and make shutdown_cache() call it directly.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Reported-by: Jay Vana <jsvana@...com>
> Cc: Vladimir Davydov <vdavydov.dev@...il.com>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>

Acked-by: Vladimir Davydov <vdavydov@...antool.org>
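[Archive editor's note: the batching scheme the patch describes -- queue
all SLAB_DESTROY_BY_RCU caches on a global list, let a single work item
detach the list and pay for one rcu_barrier() covering the whole batch --
can be sketched in userspace C. This is an illustrative analogue only:
the struct, function names, and the fake_rcu_barrier() stand-in are
hypothetical and do not appear in the actual patch.]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a kmem_cache awaiting RCU-deferred release. */
struct cache {
    struct cache *next;
};

/* Global queue of caches waiting on the barrier, as in the patch. */
static struct cache *release_list;
static int barrier_calls;   /* how many times the expensive barrier ran */
static int released;        /* how many caches were actually freed */

/* Stand-in for rcu_barrier(): expensive, so run it once per batch. */
static void fake_rcu_barrier(void)
{
    barrier_calls++;
}

/* Plays the role of shutdown_cache() queuing async work instead of
 * calling rcu_barrier() synchronously under cgroup_mutex. */
static void queue_release(struct cache *c)
{
    c->next = release_list;
    release_list = c;
}

/* The work item: detach the whole pending list, issue one barrier
 * covering every queued cache, then release them all. */
static void release_work(void)
{
    struct cache *list = release_list;

    release_list = NULL;
    if (!list)
        return;

    fake_rcu_barrier();     /* one barrier for the entire batch */

    while (list) {
        struct cache *c = list;

        list = list->next;
        released++;
        free(c);
    }
}
```

The point of the pattern is visible in the counters: however many caches
are queued before the work item runs, barrier_calls advances by at most
one per batch rather than once per cache.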
