Message-ID: <Pine.LNX.4.64.0607281344410.20754@schroedinger.engr.sgi.com>
Date: Fri, 28 Jul 2006 13:48:33 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Thomas Gleixner <tglx@...utronix.de>
cc: Pekka Enberg <penberg@...helsinki.fi>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Arjan van de Ven <arjan@...radead.org>,
alokk@...softinc.com, kiran@...lex86.org
Subject: Re: [BUG] Lockdep recursive locking in kmem_cache_free

On Fri, 28 Jul 2006, Thomas Gleixner wrote:
> On Fri, 2006-07-28 at 13:36 -0700, Christoph Lameter wrote:
> > On Fri, 28 Jul 2006, Thomas Gleixner wrote:
> >
> > > Let me know, if you need more info
> >
> > What type of NUMA system is this? How many nodes? Is memory exhausted on
> > some so that allocations are redirected? Are cpusets or memory policies
> > used to redirect allocations?
>
> Dual dual core opteron board, only one CPU brought up. This happens
> during bootup, so no special settings involved.

One cpu with two nodes?
> [ 0.000000] Bootmem setup node 0 0000000000000000-0000000080000000
> [ 0.000000] Bootmem setup node 1 0000000080000000-00000000fbff0000

Right, two nodes. We may have a special case here: one cpu having to
manage remote memory. Alien cache freeing is likely screwed up in that
case because the design assumes that a processor local to the node does
the alien cache draining. We have to take the remote lock instead (no
cpu is dedicated to that node).
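
For reference, the remote free path currently does roughly this (a
simplified sketch of cache_free_alien() in mm/slab.c; stats and error
handling elided). The alien caches are ordinary array_caches, so their
locks share a lockdep class with the per-cpu nc->lock, which is
presumably why lockdep reports the nesting as recursive:

static inline int cache_free_alien(struct kmem_cache *cachep, void *objp)
{
	int nodeid = virt_to_slab(objp)->nodeid;
	struct kmem_list3 *l3;
	struct array_cache *alien;

	/* Object belongs to this node: the normal per-cpu path handles it. */
	if (likely(nodeid == numa_node_id()))
		return 0;

	l3 = cachep->nodelists[numa_node_id()];
	if (l3->alien && l3->alien[nodeid]) {
		/*
		 * Park the object in this node's alien cache for nodeid.
		 * The design assumes a cpu local to nodeid drains it later.
		 */
		alien = l3->alien[nodeid];
		spin_lock(&alien->lock);
		if (unlikely(alien->avail == alien->limit))
			__drain_alien_cache(cachep, alien, nodeid);
		alien->entry[alien->avail++] = objp;
		spin_unlock(&alien->lock);
	} else {
		/* No alien cache: take the remote node's list lock directly. */
		spin_lock(&(cachep->nodelists[nodeid])->list_lock);
		free_block(cachep, &objp, 1, nodeid);
		spin_unlock(&(cachep->nodelists[nodeid])->list_lock);
	}
	return 1;
}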

Could you enable another processor and dedicate it to the second node?

Another idea would be to simply disable alien caches for that
configuration. I heard that Alok and Kiran have a patch that would do
this.
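
Roughly, I would expect such a patch to do something like the following
(untested sketch on my part; the flag name and where it gets cleared are
guesses, and Alok's and Kiran's patch may well differ):

/*
 * Hypothetical switch: cleared during boot when a node with memory has
 * no local cpu that could drain its alien caches (as on this board).
 */
static int use_alien_caches __read_mostly = 1;

	/* In cpuup_callback(), where l3->alien is populated today: */
	alien = NULL;
	if (use_alien_caches) {
		alien = alloc_alien_cache(node, cachep->limit);
		if (!alien)
			goto bad;
	}
	l3->alien = alien;

With l3->alien left NULL, cache_free_alien() always falls through to the
direct free_block() under the remote node's list_lock, so nothing
depends on a cpu being local to that node anymore.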