Message-Id: <20100603120814.7242.A69D9226@jp.fujitsu.com>
Date: Thu, 3 Jun 2010 13:48:09 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: David Rientjes <rientjes@...gle.com>
Cc: kosaki.motohiro@...fujitsu.com, Oleg Nesterov <oleg@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Nick Piggin <npiggin@...e.de>
Subject: Re: [PATCH] oom: remove PF_EXITING check completely
> On Wed, 2 Jun 2010, Oleg Nesterov wrote:
>
> > > Today I thought about making some band-aid patches for this issue, but
> > > yes, I've reached the same conclusion.
> > >
> > > If we consider the multithreaded and core-dump situations, all fixes are
> > > just band-aids. We can't remove the chance of deadlock completely.
> > >
> > > The deadlock is certainly the worst result, so the minor PF_EXITING
> > > optimization isn't worth that much.
> >
> > Agreed! I was always wondering if it really helps in practice.
> >
>
> Nack, this certainly does help in practice, it prevents needlessly killing
> additional tasks when one is exiting and may free memory. It's much
> better to defer killing something temporarily if an eligible task (i.e.
> one that has a high probability of memory allocations on current's nodes
> or contributing to its memcg) is exiting.
>
> We depend on this check specifically for our use of cpusets, so please
> don't remove it.
Your claim goes against our development process. Oleg pointed out that this
check doesn't just work well; it can also cause a deadlock. So we certainly
need some kind of fix, and I'll remove this check completely in the 2.6.35
timeframe.

But this doesn't mean we refuse a better patch from you at all. I expect we
can merge it very soon if you make such a patch.
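
For anyone following along, the check under discussion is the PF_EXITING test
in select_bad_process(). Roughly, and paraphrased from the 2.6.34-era
mm/oom_kill.c rather than quoted verbatim (the surrounding eligibility checks
and the declarations of p, chosen, ppoints, and uptime are elided):

	for_each_process(p) {
		unsigned long points;

		/* ... eligibility checks (kernel threads, cpuset/memcg, ...) ... */

		/*
		 * A task that is already exiting should soon release its
		 * memory, so either back off entirely (some other task is
		 * exiting) or, if the exiting task is current itself, pick it
		 * so it gets access to memory reserves and can finish.
		 */
		if (p->flags & PF_EXITING) {
			if (p != current)
				return ERR_PTR(-1UL);	/* defer: kill nothing */

			chosen = p;
			*ppoints = ULONG_MAX;
		}

		points = badness(p, uptime.tv_sec);
		if (points > *ppoints || !chosen) {
			chosen = p;
			*ppoints = points;
		}
	}

The deadlock risk is the ERR_PTR(-1UL) path: if the PF_EXITING task never
actually finishes exiting (for example, blocked during a coredump or waiting
on a resource held by a task that itself needs memory), every subsequent OOM
invocation keeps backing off, nothing is killed, and no memory is ever freed.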