Message-ID: <20090705095520.GA31587@localhost>
Date: Sun, 5 Jul 2009 17:55:21 +0800
From: Wu Fengguang <fengguang.wu@...il.com>
To: Rik van Riel <riel@...hat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
David Woodhouse <dwmw2@...radead.org>,
David Howells <dhowells@...hat.com>,
Minchan Kim <minchan.kim@...il.com>,
Mel Gorman <mel@....ul.ie>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Christoph Lameter <cl@...ux-foundation.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"tytso@....edu" <tytso@....edu>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"elladan@...imo.com" <elladan@...imo.com>,
"npiggin@...e.de" <npiggin@...e.de>,
"Barnes, Jesse" <jesse.barnes@...el.com>
Subject: Re: Found the commit that causes the OOMs
On Tue, Jun 30, 2009 at 10:57:02PM -0400, Rik van Riel wrote:
> KOSAKI Motohiro wrote:
>
>>> [ 1522.019259] Active_anon:11 active_file:6 inactive_anon:0
>>> [ 1522.019260] inactive_file:0 unevictable:0 dirty:0 writeback:0 unstable:0
>>> [ 1522.019261] free:1985 slab:44399 mapped:132 pagetables:61830 bounce:0
>>> [ 1522.019262] isolate:69817
>>
>> OK, thanks.
>> I plan to submit this patch after a few more tests. It is useful for OOM analysis.
>
> It is also useful for throttling page reclaim.
>
> If more than half of the inactive pages in a zone are
> isolated, we are probably beyond the point where adding
> additional reclaim processes will do more harm than good.
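For illustration, such a throttle check might look roughly like the
sketch below. It assumes a per-zone isolated-pages counter along the
lines of the one KOSAKI's patch adds (NR_ISOLATED is a made-up name
here), and congestion_wait() is just one possible way to back off:

        /*
         * Sketch only: back off when more than half of the inactive
         * pages in this zone are already isolated by other reclaimers.
         * NR_ISOLATED is an assumed counter; the inactive counters exist.
         */
        unsigned long isolated = zone_page_state(zone, NR_ISOLATED);
        unsigned long inactive = zone_page_state(zone, NR_INACTIVE_ANON) +
                                 zone_page_state(zone, NR_INACTIVE_FILE);

        if (isolated * 2 > inactive)
                congestion_wait(WRITE, HZ / 10);
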
Alternatively, maybe we can try limiting the isolation phase of
direct reclaim to one reclaimer per CPU?
mutex_lock(per_cpu_lock);     /* at most one direct reclaimer per CPU below */
isolate_pages();              /* pull pages off the LRU */
shrink_page_list();           /* try to reclaim them */
put_back_pages();             /* return the leftovers to the LRU */
mutex_unlock(per_cpu_lock);   /* isolated pages are back on the LRU here */
This way the number of isolated pages, as well as the concurrency of
the heavyweight parts of direct reclaim, will be bounded by the number
of CPUs. The added locking overhead should be trivial compared to the
cost of the reclaim work itself.
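
For completeness, a rough sketch of how the per-CPU lock could be set
up (the names are illustrative, not existing kernel symbols; the
function calls just mirror the pseudo code above, and the mutexes
would still need mutex_init() at boot):

        static DEFINE_PER_CPU(struct mutex, reclaim_isolate_lock);

        /* in the direct reclaim path */
        struct mutex *lock = &per_cpu(reclaim_isolate_lock,
                                      raw_smp_processor_id());

        mutex_lock(lock);       /* at most one reclaimer per CPU in here */
        isolate_pages();
        shrink_page_list();
        put_back_pages();
        mutex_unlock(lock);

Even if the task migrates to another CPU after picking its lock, each
mutex still admits only one holder, so the number of reclaimers
between isolate and putback stays bounded by the number of CPUs.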
Thanks,
Fengguang