Message-ID: <50AC24F5.9090303@linux.vnet.ibm.com>
Date: Tue, 20 Nov 2012 16:48:53 -0800
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: linux-mm@...ck.org, Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [3.7-rc6] capture_free_page() frees page without accounting for
them??
I'm really evil, so I changed the loop in compact_capture_page() to
basically steal the highest-order page it can. This shouldn't _break_
anything, but it does ensure that we'll be splitting pages that we find
more often and recreating this *MUCH* faster:
-	for (order = cc->order; order < MAX_ORDER; order++) {
+	for (order = MAX_ORDER - 1; order >= cc->order; order--) {
I also augmented the area in capture_free_page() that I expect to be
leaking:
	if (alloc_order != order) {
		static int leaked_pages = 0;
		leaked_pages += 1<<order;
		leaked_pages -= 1<<alloc_order;
		printk("%s() alloc_order(%d) != order(%d) leaked %d\n",
				__func__, alloc_order, order,
				leaked_pages);
		expand(zone, page, alloc_order, order,
			&zone->free_area[order], migratetype);
	}
I add up all the fields in buddyinfo to figure out how much _should_ be
in the allocator and then compare it to MemFree to get a guess at how
much is leaked. That number correlates _really_ well with the
"leaked_pages" variable above. That pretty much seals it for me.
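For reference, the cross-check I'm describing can be sketched roughly like
this (a minimal userspace sketch, not part of the patch; the field positions
are assumptions based on the usual /proc/buddyinfo and /proc/meminfo layouts,
and it assumes 4 KiB pages):

```python
# Sum every per-order free count reported by /proc/buddyinfo
# (pages = count << order) and compare against MemFree from /proc/meminfo.
# A gap that keeps growing suggests pages leaking out of the buddy allocator.

def buddyinfo_free_pages(buddyinfo_text):
    """Total free pages implied by buddyinfo: sum of count << order."""
    total = 0
    for line in buddyinfo_text.splitlines():
        if not line.startswith("Node"):
            continue
        # Per-order counts (order 0..MAX_ORDER-1) follow "Node N, zone <name>".
        counts = line.split()[4:]
        for order, count in enumerate(counts):
            total += int(count) << order
    return total

def meminfo_free_pages(meminfo_text, page_size_kb=4):
    """MemFree (reported in kB) converted to pages."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemFree:"):
            return int(line.split()[1]) // page_size_kb
    raise ValueError("MemFree not found")
```

On a live system you'd feed these the contents of /proc/buddyinfo and
/proc/meminfo and watch the delta over time; the two numbers will never match
exactly (per-cpu pagesets and such), but the delta should stay bounded.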
I'll run a stress test overnight to see if it pops up again. The patch
I'm running is attached. I'll send a properly changelogged one tomorrow
if it works.
View attachment "leak-fix-20121120-1.patch" of type "text/x-patch" (667 bytes)