Message-ID: <Pine.LNX.4.64.0607281422370.21238@schroedinger.engr.sgi.com>
Date: Fri, 28 Jul 2006 14:26:16 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Ravikiran G Thirumalai <kiran@...lex86.org>
cc: Thomas Gleixner <tglx@...utronix.de>,
Pekka Enberg <penberg@...helsinki.fi>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Arjan van de Ven <arjan@...radead.org>,
alokk@...softinc.com
Subject: Re: [BUG] Lockdep recursive locking in kmem_cache_free
On Fri, 28 Jul 2006, Ravikiran G Thirumalai wrote:
> Why should there be any problem taking the remote l3 lock? If the remote
> node does not have a cpu, that does not mean we cannot take that lock
> from the local node!!!
>
> I think current git does not teach lockdep to ignore recursion for
> array_cache->lock when the array_cache->locks are from different caches.
> As Arjan pointed out, I can see that l3->list_lock is special-cased, but
> I cannot find where array_cache->lock is taken care of.
Ok.
> Again, if this were indeed a real recursion problem, the machine should
> not even boot when compiled without lockdep. tglx, can you please verify
> this?
We seem to be fine on that level.
I would still like to see someone thinking through this a bit more.
Allocations via alloc_pages_node() may be redirected by cpusets, or
because a node is low on memory. This means that we can get memory on a
different node than we requested. How does that impact the alien lock
situation? In particular, what happens if the off-slab allocation for
the management object landed on a different node from the slab data?