Message-ID: <alpine.DEB.2.00.1009070918030.14634@router.home>
Date: Tue, 7 Sep 2010 09:23:48 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Dave Chinner <david@...morbit.com>
cc: Wu Fengguang <fengguang.wu@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
Linux Kernel List <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Minchan Kim <minchan.kim@...il.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct
reclaim allocation fails
On Mon, 6 Sep 2010, Dave Chinner wrote:
> [ 596.628086] [<ffffffff81108a8c>] ? drain_all_pages+0x1c/0x20
> [ 596.628086] [<ffffffff81108fad>] ? __alloc_pages_nodemask+0x42d/0x700
> [ 596.628086] [<ffffffff8113d0f2>] ? kmem_getpages+0x62/0x160
> [ 596.628086] [<ffffffff8113dce6>] ? fallback_alloc+0x196/0x240
fallback_alloc() showing up here means that one page allocator call from
SLAB has already failed. SLAB then did an expensive search through all
object caches on all nodes to find an available object. There were no
objects in the queues at all, so SLAB called the page allocator again
(kmem_getpages()).
As soon as memory is available on any node or any cpu (the queues are all
empty at this point), SLAB will repopulate its queues(!).
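
For illustration, here is a minimal userspace sketch of the sequence
described above. It is not the real mm/slab.c code: node_queue,
search_all_nodes(), page_allocator(), allocate_object() and the batch size
of 16 are invented stand-ins; only the ordering (failed allocator call,
cross-node object search, second allocator call standing in for
kmem_getpages(), then queue repopulation) follows the description.

#include <stdio.h>
#include <stdbool.h>

#define NR_NODES 4

static int node_queue[NR_NODES];	/* per-node object queues, all empty here */

/* Stand-in for the page allocator: the first call fails, as in the trace. */
static bool page_allocator(void)
{
	static int calls;
	return ++calls > 1;
}

/* The expensive search: look for a queued object on any node. */
static bool search_all_nodes(void)
{
	for (int n = 0; n < NR_NODES; n++)
		if (node_queue[n] > 0)
			return true;
	return false;
}

/* Rough order of events described above for fallback_alloc(). */
static void allocate_object(void)
{
	if (page_allocator()) {		/* first allocator call from SLAB */
		puts("fast path: got a page");
		return;
	}
	puts("page allocator failed, searching object caches on all nodes");
	if (search_all_nodes()) {
		puts("found a queued object on another node");
		return;
	}
	/* Nothing queued anywhere: call the page allocator again
	   (kmem_getpages() in the real code). */
	if (page_allocator()) {
		for (int n = 0; n < NR_NODES; n++)
			node_queue[n] = 16;	/* arbitrary batch size */
		puts("second allocator call succeeded, queues repopulated");
	} else {
		puts("allocation failed");
	}
}

int main(void)
{
	allocate_object();
	return 0;
}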