Message-ID: <20130109133746.GD13304@suse.de>
Date: Wed, 9 Jan 2013 13:37:46 +0000
From: Mel Gorman <mgorman@...e.de>
To: Eric Wong <normalperson@...t.net>
Cc: linux-mm@...ck.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Rik van Riel <riel@...hat.com>,
Minchan Kim <minchan@...nel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: ppoll() stuck on POLLIN while TCP peer is sending

On Tue, Jan 08, 2013 at 11:23:25PM +0000, Eric Wong wrote:
> Mel Gorman <mgorman@...e.de> wrote:
> > Please try the following patch. However, even if it works the benefit of
> > capture may be so marginal that partially reverting it and simplifying
> > compaction.c is the better decision.
>
> I already got my VM stuck on this one. I had two twosleepy instances,
> 2774 was the one that got stuck (also confirmed by watching top).
>

page->pfmemalloc can be left set for captured pages, so try this, but
as capture is rarely used I'm strongly favouring a partial revert even
if this works for you. I haven't reproduced this with your workload
yet, but I have found that high-order allocation stress tests on
3.8-rc2 are completely screwed: 71% success rates at rest under 3.7
versus 6% under 3.8-rc2, so I have to chase that down too.
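
The underlying problem for your hang, if I'm reading it right, is that
page->pfmemalloc gets propagated into skb->pfmemalloc when the captured
page ends up backing a socket buffer, and sk_filter() discards
pfmemalloc skbs for sockets that are not SOCK_MEMALLOC. Roughly, from
3.7's net/core/filter.c:

	/*
	 * If the skb was allocated from pfmemalloc reserves, only
	 * allow SOCK_MEMALLOC sockets to use it as this socket is
	 * helping free memory
	 */
	if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
		return -ENOMEM;

A page that wrongly keeps pfmemalloc set means the data is dropped
before it ever reaches the receiver, so ppoll() blocks on POLLIN
forever.
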
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d20c13..c242d21 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2180,8 +2180,10 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	current->flags &= ~PF_MEMALLOC;
 
 	/* If compaction captured a page, prep and use it */
-	if (page && !prep_new_page(page, order, gfp_mask))
+	if (page && !prep_new_page(page, order, gfp_mask)) {
+		page->pfmemalloc = false;
 		goto got_page;
+	}
 
 	if (*did_some_progress != COMPACT_SKIPPED) {
 		/* Page migration frees to the PCP lists but we want merging */
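
For reference, the reason the flag goes stale at all is that the
capture path hands back a page without going through
get_page_from_freelist(), which is where page->pfmemalloc is normally
set fresh on every successful allocation. Paraphrasing 3.7's
mm/page_alloc.c from memory:

	page = buffered_rmqueue(preferred_zone, zone, order,
						gfp_mask, migratetype);
	if (page)
		/*
		 * Only true for ALLOC_NO_WATERMARKS allocations. A
		 * captured page never passes through here, so whatever
		 * value it had when it was last freed is left behind.
		 */
		page->pfmemalloc = !!(alloc_flags & ALLOC_NO_WATERMARKS);

Clearing the flag on the capture path papers over that until the
partial revert is sorted out.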