Message-ID: <alpine.DEB.2.02.1306121343500.24902@chino.kir.corp.google.com>
Date: Wed, 12 Jun 2013 13:49:47 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Michal Hocko <mhocko@...e.cz>
cc: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] memcg: do not sleep on OOM waitqueue with full charge context

On Wed, 12 Jun 2013, Michal Hocko wrote:
> The patch is a big improvement with minimal code overhead. Blocking
> any task which sits on top of an unpredictable number of locks is just
> broken. So regardless of how many users are affected, we should merge it
> and backport it to the stable trees. The problem has been there forever.
> We seem to have been surprisingly lucky not to hit it more often.
>
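To make the scenario concrete, here is a minimal userspace sketch of the
deadlock shape being described (plain pthreads, not kernel code; every
identifier such as i_mutex_analog and oom_waitq is illustrative only): a
task sleeps on the per-memcg OOM waitqueue while still holding a lock that
the chosen victim needs before it can make the progress that would wake
the sleeper.

/*
 * Userspace analogy of the deadlock, not kernel code.  Thread A sleeps
 * on a "waitqueue" while still holding a lock that the only thread able
 * to wake it (the "victim") must take first.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t i_mutex_analog = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t oom_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t oom_waitq = PTHREAD_COND_INITIALIZER;
static int oom_resolved;        /* set once the "victim" has made progress */

/* Thread A: charges while holding i_mutex and sleeps until the OOM is
 * resolved -- with the lock still held. */
static void *charging_task(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&i_mutex_analog);    /* e.g. inside a write() path */

        pthread_mutex_lock(&oom_lock);
        while (!oom_resolved)                   /* never becomes true ... */
                pthread_cond_wait(&oom_waitq, &oom_lock);
        pthread_mutex_unlock(&oom_lock);

        pthread_mutex_unlock(&i_mutex_analog);
        return NULL;
}

/* Thread B: the "victim"; it has to take i_mutex before it can do the
 * work that would wake thread A. */
static void *victim_task(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&i_mutex_analog);    /* ... because B blocks here */

        pthread_mutex_lock(&oom_lock);
        oom_resolved = 1;
        pthread_cond_broadcast(&oom_waitq);
        pthread_mutex_unlock(&oom_lock);

        pthread_mutex_unlock(&i_mutex_analog);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, charging_task, NULL);
        usleep(100000);         /* crude, but lets A take i_mutex first */
        pthread_create(&b, NULL, victim_task, NULL);
        pthread_join(a, NULL);  /* hangs: A waits for B, B waits for A */
        pthread_join(b, NULL);
        puts("never reached");
        return 0;
}

Built with -lpthread this hangs in the first pthread_join(), which is the
shape the patch title refers to avoiding: sleeping on the OOM waitqueue
while the charge context still holds locks.
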
Right now it appears that the number of affected users is 0, and we're
talking about a problem that was reported against 3.2, which was released
a year and a half ago. The rules for inclusion in stable also prohibit
such a change from being backported, specifically "It must fix a real bug
that bothers people (not a, "This could be a problem..." type thing)".
We have deployed memcg on a very large number of machines, and I can run a
query over all software watchdog timeouts caused by deadlocking on i_mutex
during memcg oom. It returns 0 results.
> I am not quite sure I understand your reservation about the patch to be
> honest. Andrew still hasn't merged this one although 1/2 is in.
Perhaps he is just as unconvinced? The patch adds 100 lines of code,
including new memcg fields in task_struct, for a problem that nobody can
reproduce. My question still stands: can anybody, even with an
instrumented kernel that makes it more probable, reproduce the issue this
is addressing?