Message-Id: <20081215105053.89263708.kamezawa.hiroyu@jp.fujitsu.com>
Date: Mon, 15 Dec 2008 10:50:53 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Li Zefan <lizf@...fujitsu.com>
Cc: Paul Menage <menage@...gle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] sched: fix another race when reading /proc/sched_debug
On Mon, 15 Dec 2008 09:39:18 +0800
Li Zefan <lizf@...fujitsu.com> wrote:
> Paul Menage wrote:
> > I sent out some patches last week (search for hierarchy_mutex) that would
> > mean that you'd only need to take a subsystem-local lock to keep a cgroup
> > alive. People seemed to like them so I'll tweak them based on feedback and
> > send them on to Andrew.
> >
>
> Unfortunately, AFAICS the proposed hierarchy_mutex can't solve this bug. :(
>
Hmm? If cgroup->dentry is the problem, how about this approach?
at creation:
 - cpu_cgroup_populate() records in "tg" that "this cgroup now has a valid dentry"
at deletion:
 - css_tryget() will be useful to avoid the race (rough sketch below).
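Completely untested sketch, just to show the idea. css_tryget()/css_put()
and cgroup_path() are the existing cgroup interfaces; the function name
and the seq_file output are only illustrative, the real check would sit
in the /proc/sched_debug printing path:

/*
 * Illustrative only: skip task groups whose cgroup is already on its
 * way to being destroyed, instead of taking a global lock around the
 * whole dump. Assumes tg->css is the task_group's cgroup_subsys_state.
 */
static void print_task_group_path(struct seq_file *m, struct task_group *tg)
{
	char path[128];

	/* Fails if the cgroup is already being removed. */
	if (!css_tryget(&tg->css))
		return;

	/* Safe now: the cgroup (and its dentry) cannot go away under us. */
	cgroup_path(tg->css.cgroup, path, sizeof(path));
	seq_printf(m, " %s", path);

	css_put(&tg->css);
}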
thx,
-Kame
> > Paul
> >
> > On Dec 14, 2008 4:48 AM, "Peter Zijlstra" <a.p.zijlstra@...llo.nl> wrote:
> >
> > On Sun, 2008-12-14 at 10:54 +0800, Li Zefan wrote:
> > > > i merged it up in tip/master, could you pleas...
> > Can't we detect a dead task-group and skip those instead of adding this
> > global lock?
> >
> > > Signed-off-by: Li Zefan <lizf@...fujitsu.com>
> > > ---
> > >  kernel/sched_fair.c | 6 ++++++
> > >  kerne...
> >
>