Message-ID: <alpine.DEB.2.20.1603281940300.31323@east.gentwo.org>
Date: Mon, 28 Mar 2016 19:50:36 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: js1304@...il.com
cc: Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH 01/11] mm/slab: hold a slab_mutex when calling
__kmem_cache_shrink()
On Mon, 28 Mar 2016, js1304@...il.com wrote:
> Major kmem_cache metadata in the slab subsystem is synchronized via
> the slab_mutex. In SLAB, when some of that metadata changes, each
> node's shared array cache is freed and re-populated. If
> __kmem_cache_shrink() is called at the same time, it calls
> drain_array() with n->shared without holding the node lock, so a
> race on the shared array can occur.
>
> We could fix this small theoretical race condition by holding the
> node lock in drain_array(), but holding the slab_mutex in
> kmem_cache_shrink() looks like the more appropriate solution: a
> stable state makes things less error-prone, and this is not a
> performance-critical path.
Ummm.. The mutex is taken in common code, so this will also affect
SLUB; the patch needs to consider that. Do we want to force all
allocators to run shrinking only while holding the lock? SLUB does not
need to hold the mutex. And frankly, the mutex is for reconfiguration
of metadata, which is *not* occurring here; a shrink operation does not
do that. Can we figure out a SLAB-specific way of handling
synchronization in the strange free/realloc cycle?
It seems that taking the node lock is the appropriate level of
synchronization, since the concern is with the contents of the shared
cache at that level. There is no change of metadata that would require
the mutex.
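
Concretely, something along these lines would do it on the SLAB side.
This is illustrative only and simply reuses names from the mm/slab.c of
this era (free_block(), slabs_destroy(), n->list_lock); it is not the
actual drain_array() code:

	/*
	 * Illustrative sketch: dereference n->shared only while
	 * holding n->list_lock, so a concurrent free/realloc of the
	 * shared array cannot be observed halfway through a drain.
	 */
	static void drain_shared_locked(struct kmem_cache *cachep,
					struct kmem_cache_node *n, int node)
	{
		struct array_cache *shared;
		LIST_HEAD(list);

		spin_lock_irq(&n->list_lock);
		shared = n->shared;	/* stable only under list_lock */
		if (shared && shared->avail) {
			free_block(cachep, shared->entry, shared->avail,
				   node, &list);
			shared->avail = 0;
		}
		spin_unlock_irq(&n->list_lock);

		/* Destroy any now-empty slabs outside the lock, as the
		 * existing drain paths do. */
		slabs_destroy(cachep, &list);
	}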