Message-ID: <Pine.LNX.4.64.0610190906490.7852@schroedinger.engr.sgi.com>
Date: Thu, 19 Oct 2006 09:16:29 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Paul Mackerras <paulus@...ba.org>
cc: Will Schmidt <will_schmidt@...t.ibm.com>, akpm@...l.org,
linuxppc-dev@...abs.org, linux-kernel@...r.kernel.org
Subject: Re: kernel BUG in __cache_alloc_node at linux-2.6.git/mm/slab.c:3177!
On Thu, 19 Oct 2006, Paul Mackerras wrote:
> Get cache descriptor
Attempt to allocate the first descriptor for the first cache.
> __cache_alloc
Attempt to allocate from the caches of node 0 (which are empty during
bootstrap). We try to replenish the caches of node 0, which should have
succeeded. I guess this failed because no pages were available on
node 0. This should not happen!
It worked before 2.6.19 because the slab allocator allowed the page
allocator to fall back to node 1. However, we then put pages from node 1
on the per-node lists for node 0. This was fixed in 2.6.19 using
GFP_THISNODE.
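
The idea behind that change, as a rough sketch (grow_node_sketch is a
hypothetical helper name, not the actual slab.c code): when the per-node
lists for a node are grown, the page allocation is pinned to that node so
the page allocator can no longer hand back memory from another node.

static void *grow_node_sketch(struct kmem_cache *cachep, gfp_t flags,
			      int nodeid)
{
	struct page *page;

	/* Pin the allocation to nodeid; no zonelist fallback. */
	flags |= GFP_THISNODE;
	page = alloc_pages_node(nodeid, flags, cachep->gfporder);
	if (!page)
		return NULL;	/* caller may then go to fallback_alloc() */
	return page_address(page);
}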
> __cache_alloc_node 0
Now we go to __cache_alloc_node because it knows how to get memory from
different nodes (we should not get here at all; there should be memory
on node 0!)
> fallback_alloc
We failed another attempt to get memory from node 0. Now we are going down
the zonelist.
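
Roughly, that path looks like this (simplified sketch from memory, not
the exact 2.6.19 code; fallback_alloc_sketch is just an illustrative
name):

static void *fallback_alloc_sketch(struct kmem_cache *cache, gfp_t flags)
{
	/* Zonelist of the local node for the requested zone type. */
	struct zonelist *zl = NODE_DATA(numa_node_id())->node_zonelists +
			      gfp_zone(flags);
	struct zone **z;
	void *obj = NULL;

	/* Try each node on the fallback list until an object is found. */
	for (z = zl->zones; *z && !obj; z++)
		obj = ____cache_alloc_node(cache, flags | __GFP_THISNODE,
					   zone_to_nid(*z));
	return obj;
}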
> __cache_alloc_node 0
First attempt on node 0 (the head of the fallback list) which again has no
pages available.
> __cache_alloc_node 1
Attempt to allocate from node 1 (second zone on the fallback list).
> kernel BUG in __cache_alloc_node at /home/paulus/kernel/powerpc/mm/slab.c:3185!
Node 1 has not been set up yet since we have not completed bootstrap,
so we BUG out.
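
The check that fires is essentially this (simplified sketch): the
per-node list structure for node 1 has not been allocated during
bootstrap, so the nodelists[] pointer is still NULL.

static void *cache_alloc_node_sketch(struct kmem_cache *cachep,
				     gfp_t flags, int nodeid)
{
	struct kmem_list3 *l3 = cachep->nodelists[nodeid];

	BUG_ON(!l3);	/* node 1 not set up yet -> we BUG out here */
	/* ... otherwise take an object from l3's partial/free slabs ... */
	return NULL;
}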
Would you please make memory available on the node that you bootstrap
the slab allocator on? numa_node_id() must point to a node that has memory
available.
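
In other words, something like this hypothetical sanity check (not in
the kernel) must hold on the node you bootstrap on:

	BUG_ON(NODE_DATA(numa_node_id())->node_present_pages == 0);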
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/