Message-ID: <52FECCE3.1010707@huawei.com>
Date: Sat, 15 Feb 2014 10:11:47 +0800
From: Li Zefan <lizefan@...wei.com>
To: Tejun Heo <tj@...nel.org>
CC: <cgroups@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 cgroup/for-3.15] cgroup: make cgroup_enable_task_cg_lists()
 grab siglock

On 2014/2/15 4:47, Tejun Heo wrote:
> Currently, there's nothing explicitly preventing
> cgroup_enable_task_cg_lists() from missing a freshly set PF_EXITING
> and racing against cgroup_exit(). Depending on the timing,
> cgroup_exit() may seemingly finish with the task still linked on a
> css_set, leading to list corruption, because
> cgroup_enable_task_cg_lists() can end up linking it after the
> list_empty(&tsk->cg_list) test in cgroup_exit().
>
> This can't really happen because exit_mm() grabs and releases
> task_lock() between the setting of PF_EXITING and cgroup_exit(), and
> cgroup_enable_task_cg_lists() synchronizes against task_lock too;
> however, this is fragile and more of a happy accident. Let's make
> the synchronization explicit by making cgroup_enable_task_cg_lists()
> grab siglock around the PF_EXITING test.
>
> This whole on-demand cg_list optimization is extremely fragile and
> offers ample opportunity for bugs which can cause things like a
> once-a-year oops during boot. I'm wondering whether the better
> approach would be to just add "cgroup_disable=all" handling which
> disables cgroup altogether, rather than tempting fate with this
> dynamic optimization craziness.
>
> v2: Li pointed out that the race condition can't actually happen due
> to task_lock locking in exit_mm(). Updated the patch description
> accordingly and dropped -stable cc.
>
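
(Just to make the proposed synchronization concrete, here is a rough
sketch of what I read the changelog as describing: the scanning loop
taking siglock around the PF_EXITING test. This is illustrative only,
not your actual diff; the loop structure, task_css_set() and the
cg_list/tasks linkage are from memory of the current code, and the
outer locking is omitted.)

	do_each_thread(g, p) {
		task_lock(p);
		spin_lock_irq(&p->sighand->siglock);
		/*
		 * Test PF_EXITING with siglock held so the test is
		 * explicitly ordered against the exit path, instead of
		 * depending on exit_mm()'s incidental task_lock usage.
		 */
		if (!(p->flags & PF_EXITING) && list_empty(&p->cg_list))
			list_add(&p->cg_list, &task_css_set(p)->tasks);
		spin_unlock_irq(&p->sighand->siglock);
		task_unlock(p);
	} while_each_thread(g, p);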
I realise exit_mm() is a no-op for threads... There are quite a few
places where task_lock is used between exit_signals() and cgroup_exit(),
but they're all conditional, so I think your original changelog stands!
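
For reference, the ordering I have in mind is roughly the following
(heavily simplified, only the calls relevant here; details from memory
and approximate):

	void do_exit(long code)
	{
		...
		exit_signals(tsk);   /* PF_EXITING gets set here */
		...
		exit_mm(tsk);        /* takes and drops task_lock(), but
		                        only if the task still has an mm */
		...
		cgroup_exit(tsk);    /* the list_empty(&tsk->cg_list)
		                        test mentioned above */
		...
	}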