Message-Id: <20101019113316.A1CF.A69D9226@jp.fujitsu.com>
Date: Tue, 19 Oct 2010 11:54:08 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Minchan Kim <minchan.kim@...il.com>
Cc: kosaki.motohiro@...fujitsu.com,
Andrew Morton <akpm@...ux-foundation.org>,
Neil Brown <neilb@...e.de>,
Wu Fengguang <fengguang.wu@...el.com>,
Rik van Riel <riel@...hat.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Li, Shaohua" <shaohua.li@...el.com>
Subject: Re: Deadlock possibly caused by too_many_isolated.
> >> > Can you please elaborate your intention? Do you think Wu's approach is wrong?
> >>
> >> No. I think Wu's patch may work well, but I agree with Andrew.
> >> Couldn't we remove the too_many_isolated logic? If we could, the
> >> problem would be solved simply.
> >> But if we remove the logic, we will hit the old problem again.
> >> So my patch's intention is to prevent the OOM and deadlock problems
> >> with a simple patch, without adding a new heuristic to too_many_isolated.
> >
> > But your patch has a much higher chance of false positives/negatives, because
> > the point where pages are isolated and the too_many_isolated_zone() call site
> > are far apart.
>
> Yes.
> How about having the returned *did_some_progress indicate a too_many_isolated
> failure, using the MSB or a new variable?
> Then the page allocator could check whether it was caused by a real reclaim
> failure or by parallel reclaim.
> The point is to throttle without holding FS/IO locks.
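
A rough sketch of the quoted proposal, purely illustrative: the flag name,
both helpers, and the wait_iff_congested(zone, sync, timeout) form being
discussed at the time are assumptions, not code from either patch.

	/*
	 * Illustrative sketch only: instead of sleeping inside
	 * shrink_inactive_list(), encode "too_many_isolated fired" in the MSB
	 * of the progress value and let __alloc_pages_slowpath() throttle
	 * where no FS/IO locks are held.  DID_PROGRESS_ISOLATED and both
	 * helper names are made up for illustration.
	 */
	#define DID_PROGRESS_ISOLATED	(1UL << (BITS_PER_LONG - 1))

	/* reclaim side: report the condition instead of sleeping on it */
	static unsigned long report_progress(unsigned long nr_reclaimed,
					     bool hit_too_many_isolated)
	{
		unsigned long progress = nr_reclaimed & ~DID_PROGRESS_ISOLATED;

		if (hit_too_many_isolated)
			progress |= DID_PROGRESS_ISOLATED;
		return progress;
	}

	/* allocator side: back off outside of any FS/IO locks */
	static void throttle_on_isolation(unsigned long did_some_progress,
					  struct zone *preferred_zone)
	{
		if (did_some_progress & DID_PROGRESS_ISOLATED)
			wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/10);
	}

The allocator-side check is what would let the caller distinguish "no progress
because reclaim really failed" from "no progress because parallel reclaim has
too many pages isolated", which is the distinction the quoted text is after.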
Wu's version sleeps in shrink_inactive_list(); your version sleeps in __alloc_pages_slowpath()
via wait_iff_congested(). Neither releases the lock, I think.
But if alloc_pages() is made to fail for GFP_NOIO allocations, we introduce another issue.
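
For reference, the throttle both patches rework sits at the top of
shrink_inactive_list(); the loop below is a from-memory sketch of the
2.6.36-era code, not an exact quote. This is where Wu's version keeps the
sleep, while the reclaim caller may already be holding FS/IO locks:

	/*
	 * Sketch (from memory) of the existing throttle in mm/vmscan.c,
	 * shrink_inactive_list().  The sleep happens here, potentially with
	 * FS/IO locks held by the caller - the deadlock under discussion.
	 */
	while (unlikely(too_many_isolated(zone, file, sc))) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/* We are about to die and free our memory. Return now. */
		if (fatal_signal_pending(current))
			return SWAP_CLUSTER_MAX;
	}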