Message-ID: <alpine.DEB.2.00.1002160047340.17122@chino.kir.corp.google.com>
Date: Tue, 16 Feb 2010 00:49:14 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Nick Piggin <npiggin@...e.de>
cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Lubos Lunak <l.lunak@...e.cz>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch 1/7 -mm] oom: filter tasks not sharing the same cpuset
On Tue, 16 Feb 2010, Nick Piggin wrote:
> Yes we do need to explain the downside of the patch. It is a
> heuristic and we can't call either approach perfect.
>
> The fact is that even if 2 tasks are on completely disjoint
> memory policies and never _allocate_ from one another's nodes,
> you can still have one task pinning memory of the other task's
> node.
>
> Most shared and userspace-pinnable resources (pagecache, vfs
> caches, fds, files, sockets, etc.) are basically allocated
> first-touch.
>
> I don't see much usage of cpusets and oom killer first hand in
> my experience, so I am happy to defer to others when it comes
> to heuristics. Just so long as we are all aware of the full
> story :)
>
Unless you can present a heuristic that determines how much memory a given
task has allocated on nodes in current's zonelist, we must exclude tasks
from cpusets with a disjoint set of nodes, otherwise we cannot determine
the optimal task to kill.  There's a strong possibility that killing a task
on a disjoint set of mems will never free memory for current, making it a
needless kill.  That's a much more serious consequence, in my opinion, than
the patch's downside of simply killing current.
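
For reference, here's a minimal standalone sketch of the intersection test
that this filtering amounts to (this is illustrative C, not the kernel
code; node masks are modeled as plain bitmasks and all names here are
hypothetical): a task is only worth considering as an OOM victim if its
allowed nodes overlap the nodes current is allocating from.

/*
 * Illustrative sketch only, not kernel code: a candidate is considered
 * for OOM killing only if the set of nodes it may allocate from
 * intersects the set of nodes current is trying to allocate from.
 * Node masks are modeled as 64-bit bitmasks, one bit per NUMA node.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t nodemask_t;		/* one bit per NUMA node */

struct task {
	const char *comm;		/* task name */
	nodemask_t mems_allowed;	/* nodes the task may allocate from */
};

/* Killing @victim can only help if it may have memory on @current_mems. */
static bool oom_candidate(const struct task *victim, nodemask_t current_mems)
{
	return (victim->mems_allowed & current_mems) != 0;
}

int main(void)
{
	nodemask_t current_mems = 1ULL << 0;		  /* allocating on node 0 */
	struct task a = { "same-cpuset", 1ULL << 0 };	  /* overlaps: consider  */
	struct task b = { "disjoint-cpuset", 1ULL << 1 }; /* disjoint: skip      */

	printf("%s: %s\n", a.comm,
	       oom_candidate(&a, current_mems) ? "consider" : "skip");
	printf("%s: %s\n", b.comm,
	       oom_candidate(&b, current_mems) ? "consider" : "skip");
	return 0;
}

Nick's point above is exactly where this stays a heuristic: a task skipped
by the intersection test can still be pinning first-touch pagecache or vfs
objects on current's nodes, so the check trades that false negative for
avoiding kills that cannot free anything useful.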