Message-ID: <20110809154346.GA18800@tiehlicka.suse.cz>
Date: Tue, 9 Aug 2011 17:43:46 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Johannes Weiner <jweiner@...hat.com>
Cc: linux-mm@...ck.org, Balbir Singh <bsingharora@...il.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2 v2] memcg: make oom_lock 0 and 1 based rather than counter
On Tue 09-08-11 17:37:32, Johannes Weiner wrote:
> On Tue, Aug 09, 2011 at 05:22:18PM +0200, Michal Hocko wrote:
> > On Tue 09-08-11 16:03:12, Johannes Weiner wrote:
> > > struct mem_cgroup *iter, *failed = NULL;
> > > bool cond = true;
> > >
> > > for_each_mem_cgroup_tree_cond(iter, mem, cond) {
> > > - bool locked = iter->oom_lock;
> > > -
> > > - iter->oom_lock = true;
> > > - if (lock_count == -1)
> > > - lock_count = iter->oom_lock;
> > > - else if (lock_count != locked) {
> > > + if (iter->oom_lock) {
> > > /*
> > > * this subtree of our hierarchy is already locked
> > > * so we cannot give a lock.
> > > */
> > > - lock_count = 0;
> > > failed = iter;
> > > cond = false;
> > > - }
> > > + } else
> > > + iter->oom_lock = true;
> > > }
> > >
> > > if (!failed)
> >
> > We can return here and get rid of the done label.
>
> Ah, right you are. Here is an update.
Thanks!
>
> ---
> From 86b36904033e6c6a1af4716e9deef13ebd31e64c Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <jweiner@...hat.com>
> Date: Tue, 9 Aug 2011 15:31:30 +0200
> Subject: [patch] memcg: fix hierarchical oom locking
>
> Commit "79dfdac memcg: make oom_lock 0 and 1 based rather than
> counter" tried to oom lock the hierarchy and roll back upon
> encountering an already locked memcg.
>
> The code is confused when it comes to detecting a locked memcg,
> though, so it would fail and roll back after locking one memcg and
> encountering an unlocked second one.
It is actually worse than that. The way it is broken also allows
locking a hierarchy which already contains a locked subtree...
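Both failure modes are easy to reproduce with a minimal userspace model
of the old loop. This is only a sketch: the flat array stands in for
for_each_mem_cgroup_tree_cond(), the rollback path is omitted, and the
names are made up:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Userspace model of the old mem_cgroup_oom_lock() loop; a flat
     * array stands in for the hierarchy walk, rollback is omitted.
     */
    static int old_oom_lock(bool *oom_lock, int n)
    {
            int lock_count = -1;

            for (int i = 0; i < n; i++) {
                    bool locked = oom_lock[i];

                    oom_lock[i] = true;
                    if (lock_count == -1)
                            lock_count = oom_lock[i]; /* always 1: just set above */
                    else if (lock_count != locked) {
                            lock_count = 0;           /* report failure */
                            break;
                    }
            }
            return lock_count;
    }

    int main(void)
    {
            bool unlocked[] = { false, false };     /* nothing locked */
            bool child_locked[] = { false, true };  /* child already locked */

            /* prints 0: fails although the whole hierarchy was unlocked */
            printf("%d\n", old_oom_lock(unlocked, 2));
            /* prints 1: "succeeds" although the child was already locked */
            printf("%d\n", old_oom_lock(child_locked, 2));
            return 0;
    }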
>
> The result is that oom-locking hierarchies fails unconditionally and
> that every oom killer invocation simply goes to sleep on the oom
> waitqueue forever. The tasks practically hang forever without anyone
> intervening, possibly holding locks that trip up unrelated tasks, too.
>
> Signed-off-by: Johannes Weiner <jweiner@...hat.com>
> Acked-by: Michal Hocko <mhocko@...e.cz>
> ---
> mm/memcontrol.c | 17 +++++------------
> 1 files changed, 5 insertions(+), 12 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index c6faa32..f39c8fb 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1843,29 +1843,23 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
> */
> static bool mem_cgroup_oom_lock(struct mem_cgroup *mem)
> {
> - int lock_count = -1;
> struct mem_cgroup *iter, *failed = NULL;
> bool cond = true;
>
> for_each_mem_cgroup_tree_cond(iter, mem, cond) {
> - bool locked = iter->oom_lock;
> -
> - iter->oom_lock = true;
> - if (lock_count == -1)
> - lock_count = iter->oom_lock;
> - else if (lock_count != locked) {
> + if (iter->oom_lock) {
> /*
> * this subtree of our hierarchy is already locked
> * so we cannot give a lock.
> */
> - lock_count = 0;
> failed = iter;
> cond = false;
> - }
> + } else
> + iter->oom_lock = true;
> }
>
> if (!failed)
> - goto done;
> + return true;
>
> /*
> * OK, we failed to lock the whole subtree so we have to clean up
> @@ -1879,8 +1873,7 @@ static bool mem_cgroup_oom_lock(struct mem_cgroup *mem)
> }
> iter->oom_lock = false;
> }
> -done:
> - return lock_count;
> + return false;
> }
>
> /*
> --
> 1.7.6
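And just to double check, the fixed logic behaves as expected in the
same userspace model (again only a sketch; the flat array and the names
stand in for the real hierarchy walk):

    #include <stdbool.h>
    #include <stdio.h>

    static bool new_oom_lock(bool *oom_lock, int n)
    {
            int failed = -1;

            /* Try-lock every node, remember the first one already held. */
            for (int i = 0; i < n && failed < 0; i++) {
                    if (oom_lock[i])
                            failed = i;
                    else
                            oom_lock[i] = true;
            }
            if (failed < 0)
                    return true;

            /*
             * Roll back only the locks we took, stopping before the
             * one that was already held by somebody else.
             */
            for (int i = 0; i < failed; i++)
                    oom_lock[i] = false;
            return false;
    }

    int main(void)
    {
            bool unlocked[] = { false, false };
            bool child_locked[] = { false, true };

            /* prints 1: whole hierarchy locked successfully */
            printf("%d\n", new_oom_lock(unlocked, 2));
            /* prints 0: child was held, the root lock was rolled back */
            printf("%d\n", new_oom_lock(child_locked, 2));
            return 0;
    }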
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic