Message-ID: <alpine.DEB.2.00.1108201331400.3008@localhost6.localdomain6>
Date: Sat, 20 Aug 2011 13:32:53 +0300 (EEST)
From: Pekka Enberg <penberg@...nel.org>
To: Christoph Lameter <cl@...ux.com>
cc: linux-kernel@...r.kernel.org, rientjes@...gle.com
Subject: Re: [slub p4 1/7] slub: free slabs without holding locks (V2)
On Tue, 9 Aug 2011, Christoph Lameter wrote:
> There are two situations in which slub holds a lock while releasing
> pages:
>
> A. During kmem_cache_shrink()
> B. During kmem_cache_close()
>
> For A, build a list while holding the lock and then release the pages
> later. In case of B we are the last remaining user of the slab, so
> there is no need to take the list_lock.
>
> After this patch all calls to the page allocator to free pages are
> done without holding any spinlocks. kmem_cache_destroy() will still
> hold the slub_lock semaphore.
>
> V1->V2. Remove kfree. Avoid locking in free_partial.
>
> Signed-off-by: Christoph Lameter <cl@...ux.com>
>
> ---
> mm/slub.c | 26 +++++++++++++-------------
> 1 file changed, 13 insertions(+), 13 deletions(-)
>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c 2011-08-09 13:01:59.071582163 -0500
> +++ linux-2.6/mm/slub.c 2011-08-09 13:05:00.051582012 -0500
> @@ -2970,13 +2970,13 @@ static void list_slab_objects(struct kme
>
> /*
> * Attempt to free all partial slabs on a node.
> + * This is called from kmem_cache_close(). We must be the last thread
> + * using the cache and therefore we do not need to lock anymore.
> */
> static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> {
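
For reference, the pattern the changelog describes for case A (collect the
empty slabs on a private list while holding list_lock, then hand the pages
back to the page allocator after dropping the lock) looks roughly like the
sketch below. This is illustrative, not the hunk from the patch:
shrink_node_partial() is a made-up name, while list_lock, nr_partial and
discard_slab() are the existing slub names.

static void shrink_node_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
	LIST_HEAD(discard);
	struct page *page, *t;
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry_safe(page, t, &n->partial, lru) {
		if (!page->inuse) {
			/* Empty slab: unhook it while the lock is held ... */
			list_move(&page->lru, &discard);
			n->nr_partial--;
		}
	}
	spin_unlock_irqrestore(&n->list_lock, flags);

	/* ... and free the pages only after the lock has been dropped. */
	list_for_each_entry_safe(page, t, &discard, lru)
		discard_slab(s, page);
}

Deferring discard_slab() this way keeps the calls into the page allocator
outside the spinlock, which is the point of the patch.
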
Is it possible to somehow verify that we're the last thread using the
cache when SLUB debugging is enabled? It'd be useful for tracking down
callers that violate this assumption.
Pekka
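
One way the kind of check asked about above could be wired up, purely as a
sketch: in kmem_cache_close(), after free_partial(), warn if the node still
has slabs. kmem_cache_debug(), slabs_node(), get_node() and nr_partial are
existing slub names; the WARN_ONCE itself is hypothetical and not something
the patch adds.

	/*
	 * Illustration only, not part of the patch: a debug-time check in
	 * kmem_cache_close() along these lines would flag a node that still
	 * holds slabs after free_partial(), i.e. a caller that was not the
	 * last user of the cache.
	 */
	for_each_node_state(node, N_NORMAL_MEMORY) {
		struct kmem_cache_node *n = get_node(s, node);

		free_partial(s, n);
		if (kmem_cache_debug(s))
			WARN_ONCE(n->nr_partial || slabs_node(s, node),
				  "%s: slabs left on node %d after close\n",
				  s->name, node);
	}
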