Message-ID: <20100915033059.GA12542@localhost>
Date: Wed, 15 Sep 2010 11:31:00 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: "Li, Shaohua" <shaohua.li@...el.com>
Cc: Neil Brown <neilb@...e.de>, Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: Deadlock possibly caused by too_many_isolated.
On Wed, Sep 15, 2010 at 11:18:32AM +0800, Li, Shaohua wrote:
> > +	if (!(sc->gfp_mask & __GFP_WAIT))
> > +		return 0;
> > +
> it appears !__GFP_WAIT allocation doesn't go to direct reclaim.
Good point! So we are back to the very first version of this check ;)
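For reference, a !__GFP_WAIT allocation bails out of the allocator's slow
path before direct reclaim is ever attempted, so the check above can never
trigger. Below is a rough standalone illustration; the flag value and the
helper are made up for the example, they are not the kernel's definitions:

#include <stdbool.h>
#include <stdio.h>

#define __GFP_WAIT	0x10u	/* illustrative value only, not the real flag */

/* sketch: direct reclaim (and hence too_many_isolated) is only
 * reachable for allocations that are allowed to sleep */
static bool may_enter_direct_reclaim(unsigned int gfp_mask)
{
	return (gfp_mask & __GFP_WAIT) != 0;
}

int main(void)
{
	printf("atomic-like allocation: %d\n", may_enter_direct_reclaim(0));
	printf("sleepable allocation:   %d\n", may_enter_direct_reclaim(__GFP_WAIT));
	return 0;
}
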
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1135,6 +1135,7 @@ static int too_many_isolated(struct zone *zone, int file,
 		struct scan_control *sc)
 {
 	unsigned long inactive, isolated;
+	int ratio;
 
 	if (current_is_kswapd())
 		return 0;
@@ -1150,7 +1151,9 @@ static int too_many_isolated(struct zone *zone, int file,
 		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
 	}
 
-	return isolated > inactive;
+	ratio = sc->gfp_mask & (__GFP_IO | __GFP_FS) ? 1 : 8;
+
+	return isolated > inactive * ratio;
 }
 
 /*
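
In case it helps review, here is a tiny userspace sketch of the new
threshold behaviour; the flag values and numbers are made up for the
example, only the ratio logic mirrors the patch:

#include <stdio.h>

#define __GFP_IO	0x40u	/* illustrative values only */
#define __GFP_FS	0x80u

/* same check as the patch, but over plain numbers */
static int too_many_isolated_sketch(unsigned long inactive,
				    unsigned long isolated,
				    unsigned int gfp_mask)
{
	/* callers that can do IO/FS keep the old 1:1 limit; IO/FS-less
	 * callers are tolerated up to 8x inactive before being throttled */
	int ratio = (gfp_mask & (__GFP_IO | __GFP_FS)) ? 1 : 8;

	return isolated > inactive * ratio;
}

int main(void)
{
	/* e.g. 1000 inactive pages, 3000 already isolated */
	printf("GFP_KERNEL-like caller throttled: %d\n",
	       too_many_isolated_sketch(1000, 3000, __GFP_IO | __GFP_FS));
	printf("GFP_NOIO-like caller throttled:   %d\n",
	       too_many_isolated_sketch(1000, 3000, 0));
	return 0;
}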