Message-Id: <1154106859.6416.31.camel@laptopd505.fenrus.org>
Date: Fri, 28 Jul 2006 19:14:19 +0200
From: Arjan van de Ven <arjan@...radead.org>
To: Ravikiran G Thirumalai <kiran@...lex86.org>
Cc: Christoph Lameter <clameter@....com>,
Pekka Enberg <penberg@...helsinki.fi>, alokk@...softinc.com,
tglx@...utronix.de, LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [BUG] Lockdep recursive locking in kmem_cache_free
> cache_free_alien could get called, but there is no recursion here:
>
> 1. reap_alien tries dropping remote objects freed by the local node (A) into the
> remote node's (B) shared array cache (choosing the remote node as indicated by the
> node rotor). To do this, it takes the local alien cache lock (A) and calls
> __drain_alien_cache. The remote object comes from a slab cache, say X.
>
> 2. __drain_alien_cache takes the remote node's l3 lock (B), transfers as many
> objects as the shared array cache of the remote node can hold, and calls
> free_block to free the remaining objects that could not be dropped into the
> shared array cache of the remote node (B). Now free_block is being called from
> (A) to free objects on (B).
>
> 3. free_block calls slab_destroy for the slab belonging to B, which calls
> kmem_cache_free for the slab management, which calls __cache_free and
> hence cache_free_alien(). Now, since this is being called from A for an object
> local to B, the check in cache_free_alien fails, and cache_free_alien
> *does* get executed. Since slab management of a slab from B, local to B, is
> being freed from A, A tries to write to its local alien cache corresponding to B,
> which itself comes from a slab cache Y. There would be recursion only if X and Y
> were the same cache, but that is not a possibility at all, as the off-slab
> management for a slab cache cannot come from the same slab cache. So this looks
> like a false positive from lockdep.
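
(For readers following along: a compressed sketch of the check from step 3,
in the spirit of the 2.6-era mm/slab.c but not a verbatim copy -- names like
virt_to_slab() and the nodelists/alien layout are approximated from memory:)

	/*
	 * Sketch only: simplified, field names approximate.  Returns 0 when
	 * the object is local (nothing alien to do), 1 when it was queued
	 * for / freed back to its home node.
	 */
	static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
	{
		int nodeid = virt_to_slab(objp)->nodeid;  /* node owning objp */
		struct array_cache *alien;

		/* The "check" from step 3: local objects bail out here. */
		if (likely(nodeid == numa_node_id()))
			return 0;

		/*
		 * Remote object (B's slab management, freed from A): queue it
		 * in this node's alien cache for B.  Taking alien->lock here,
		 * while the alien lock taken in step 1 by reap_alien() is
		 * still held, is the nesting described above; it only
		 * recurses on the same lock if caches X and Y coincide,
		 * which off-slab management rules out.
		 */
		alien = cachep->nodelists[numa_node_id()]->alien[nodeid];
		spin_lock(&alien->lock);
		/* ... stash objp, draining to B's l3 via free_block() if full ... */
		spin_unlock(&alien->lock);
		return 1;
	}
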
Actually, lockdep doesn't see normal slabs and the slabs where off-slab
management comes from as the same lock, so it really shouldn't complain
about this specific case.
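
(To make "the same lock" concrete: lockdep groups locks into classes by the
lock_class_key they were initialized with, so two spinlocks set up with
different keys are never the same lock as far as lockdep is concerned.  A
rough illustration of that mechanism -- the key name is invented, and this
is not a claim about what mm/slab.c itself does:)

	/* Illustration only; offslab_alien_key is a hypothetical name. */
	static struct lock_class_key offslab_alien_key;

	spin_lock_init(&alien->lock);
	/*
	 * Give this lock its own class so lockdep tracks it separately from
	 * the alien locks of ordinary caches.
	 */
	lockdep_set_class(&alien->lock, &offslab_alien_key);
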