Message-Id: <200710251306.39237.nickpiggin@yahoo.com.au>
Date: Thu, 25 Oct 2007 13:06:38 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Christoph Lameter <clameter@....com>
Cc: Alexey Dobriyan <adobriyan@...il.com>, Mel Gorman <mel@...net.ie>,
Pekka Enberg <penberg@...helsinki.fi>,
linux-kernel@...r.kernel.org, linux-mm@...r.kernel.org
Subject: Re: SLUB 0:1 SLAB (OOM during massive parallel kernel builds)

On Thursday 25 October 2007 12:43, Christoph Lameter wrote:
> On Thu, 25 Oct 2007, Nick Piggin wrote:
> > > Ummm... all unreclaimable is set! Are you mlocking the pages in memory?
> > > Or what causes this? All pages under writeback? What is the dirty ratio
> > > set to?
> >
> > Why is SLUB behaving differently, though.
>
> Not sure. Are we really sure that this does not occur using SLAB?

From the reports it seems pretty consistent. I guess it could well
be something that may occur with SLAB *if the conditions are a bit
different*...

> > Memory efficiency wouldn't be the reason, would it? I mean, SLUB
> > should be more efficient than SLAB, plus have less data lying around
> > in queues.
>
> SLAB may have data around in queues which (if the stars align the right
> way) may allow it to go longer without having to get a page from the page
> allocator.
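
For concreteness, the queueing effect being described is roughly the
following. This is only a simplified userspace sketch, not the real
mm/slab.c code; the names and sizes are made up. The point is that as
long as the per-CPU queue still holds objects, an allocation never
touches the page allocator at all:

	/*
	 * Simplified model of a SLAB-style per-CPU array cache
	 * (illustrative only; not the actual kernel implementation).
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define QUEUE_SIZE	16	/* objects kept queued between refills */
	#define OBJ_SIZE	256

	struct array_cache {
		int avail;			/* objects currently queued */
		void *entries[QUEUE_SIZE];
	};

	/* Stand-in for asking the page allocator for a fresh slab page. */
	static int refill_from_page_allocator(struct array_cache *ac)
	{
		int i;

		for (i = 0; i < QUEUE_SIZE; i++) {
			ac->entries[i] = malloc(OBJ_SIZE);
			if (!ac->entries[i])
				break;
		}
		ac->avail = i;
		return i;		/* 0 means the "page allocator" failed */
	}

	static void *cache_alloc(struct array_cache *ac)
	{
		if (!ac->avail && !refill_from_page_allocator(ac))
			return NULL;	/* only now would we OOM */
		return ac->entries[--ac->avail];
	}

	static void cache_free(struct array_cache *ac, void *obj)
	{
		if (ac->avail < QUEUE_SIZE)
			ac->entries[ac->avail++] = obj;	/* queued for reuse */
		else
			free(obj);			/* queue full: return it */
	}

	int main(void)
	{
		struct array_cache ac = { .avail = 0 };
		void *obj = cache_alloc(&ac);	/* first alloc triggers a refill */

		cache_free(&ac, obj);
		/* Second alloc comes straight from the queue: no new page. */
		obj = cache_alloc(&ac);
		printf("allocated %p without a new page\n", obj);
		/* Remaining queued objects leak on exit; fine for a sketch. */
		return 0;
	}
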
But page allocs from slab aren't where the OOMs are occurring, so
this seems unlikely. (Also, the all_unreclaimable logic should now be
pretty strict, so you have to really run the machine out of memory:
1GB of swap gets fully used, then his DMA32 zone is scanned 8 times
without reclaiming a single page.)
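
To illustrate how strict that is, here is a rough sketch of the kind
of check being described. It is paraphrased from the description
above rather than copied from mm/vmscan.c, and the field names and
the factor of 8 are only illustrative:

	/* Rough sketch of an all_unreclaimable-style heuristic. */
	#include <stdbool.h>
	#include <stdio.h>

	struct zone_state {
		unsigned long reclaimable_pages;  /* pages on the LRU lists */
		unsigned long pages_scanned;      /* scanned since last freed page */
		bool all_unreclaimable;
	};

	#define SCAN_PASSES_BEFORE_GIVING_UP	8

	/* Called from the (simulated) reclaim path after each scan batch. */
	static void note_scan_result(struct zone_state *zone,
				     unsigned long scanned, unsigned long freed)
	{
		if (freed) {
			/* Any progress at all resets the zone's state. */
			zone->pages_scanned = 0;
			zone->all_unreclaimable = false;
			return;
		}

		zone->pages_scanned += scanned;
		if (zone->pages_scanned >=
		    zone->reclaimable_pages * SCAN_PASSES_BEFORE_GIVING_UP)
			zone->all_unreclaimable = true;
	}

	/* OOM only once every zone has been written off like this. */
	static bool should_oom(const struct zone_state *zones, int nr_zones)
	{
		for (int i = 0; i < nr_zones; i++)
			if (!zones[i].all_unreclaimable)
				return false;
		return true;
	}

	int main(void)
	{
		struct zone_state zones[2] = {
			{ .reclaimable_pages = 1000 },
			{ .reclaimable_pages = 4000 },
		};

		/* Simulate scan passes that free nothing in either zone. */
		for (int pass = 0; pass < SCAN_PASSES_BEFORE_GIVING_UP; pass++) {
			note_scan_result(&zones[0], 1000, 0);
			note_scan_result(&zones[1], 4000, 0);
		}

		printf("OOM now? %s\n", should_oom(zones, 2) ? "yes" : "no");
		return 0;
	}
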
That said, the memory footprint of a parallel kernel compile can
change a lot depending on small variations in timing. So it might not
be anything to worry about.