Message-ID: <4EF93018.5020807@cn.fujitsu.com>
Date: Tue, 27 Dec 2011 10:40:24 +0800
From: Li Zefan <lizf@...fujitsu.com>
To: Mandeep Singh Baines <msb@...omium.org>
CC: Tejun Heo <tj@...nel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org, cgroups@...r.kernel.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Paul Menage <paul@...lmenage.org>
Subject: Re: [PATCH 1/2] cgroup: replace tasklist_lock with rcu_read_lock

Mandeep Singh Baines wrote:
> Since cgroup_attach_proc is protected by a threadgroup_lock, we
> can replace the tasklist_lock in cgroup_attach_proc with an
> rcu_read_lock.
>
> To keep the complexity of the double-check locking
> in one place, I also moved the thread_group_leader check up into
> attach_task_by_pid. This allows us to use a goto instead of
> returning -EAGAIN.

Please split it into 2 patches, so each patch does one and only
one thing.
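
(Just to check I read the intent right: since attach_task_by_pid takes
threadgroup_lock() before calling cgroup_attach_proc, the leader check can
live there and retry locally, roughly like the sketch below. This is only an
illustration of the retry idea, with the pid == 0 case, credential checks and
most error paths trimmed -- not the exact hunk further down:

retry_find_task:
	rcu_read_lock();
	tsk = find_task_by_vpid(pid);
	if (!tsk) {
		rcu_read_unlock();
		ret = -ESRCH;
		goto out_unlock_cgroup;
	}
	if (threadgroup)
		tsk = tsk->group_leader;
	get_task_struct(tsk);
	rcu_read_unlock();

	threadgroup_lock(tsk);
	if (threadgroup && !thread_group_leader(tsk)) {
		/*
		 * a race with de_thread() from exec() changed the leader
		 * under us; drop everything and redo the lookup.
		 */
		threadgroup_unlock(tsk);
		put_task_struct(tsk);
		goto retry_find_task;
	}

so the -EAGAIN bounce back through cgroup_procs_write() indeed goes away.)
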
>
> While at it, also converted a couple of returns to gotos.
>
> Changes in V3:
> * https://lkml.org/lkml/2011/12/22/419 (Frederic Weisbecker)
> * Add an rcu_read_lock to protect against exit
> Changes in V2:
> * https://lkml.org/lkml/2011/12/22/86 (Tejun Heo)
> * Use a goto instead of returning -EAGAIN
>
> Suggested-by: Frederic Weisbecker <fweisbec@...il.com>
> Signed-off-by: Mandeep Singh Baines <msb@...omium.org>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Li Zefan <lizf@...fujitsu.com>
> Cc: containers@...ts.linux-foundation.org
> Cc: cgroups@...r.kernel.org
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> Cc: Oleg Nesterov <oleg@...hat.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Paul Menage <paul@...lmenage.org>
> ---
> kernel/cgroup.c | 74 +++++++++++++++++++-----------------------------------
> 1 files changed, 26 insertions(+), 48 deletions(-)
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 1042b3c..6ee1438 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -2102,21 +2102,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> if (retval)
> goto out_free_group_list;
>
> - /* prevent changes to the threadgroup list while we take a snapshot. */
> - read_lock(&tasklist_lock);
> - if (!thread_group_leader(leader)) {
> - /*
> - * a race with de_thread from another thread's exec() may strip
> - * us of our leadership, making while_each_thread unsafe to use
> - * on this task. if this happens, there is no choice but to
> - * throw this task away and try again (from cgroup_procs_write);
> - * this is "double-double-toil-and-trouble-check locking".
> - */
> - read_unlock(&tasklist_lock);
> - retval = -EAGAIN;
> - goto out_free_group_list;
> - }
> -
> + rcu_read_lock();
> tsk = leader;
> i = 0;
> do {
> @@ -2145,7 +2131,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> group_size = i;
> tset.tc_array = group;
> tset.tc_array_len = group_size;
> - read_unlock(&tasklist_lock);
> + rcu_read_unlock();
>
> /* methods shouldn't be called if no task is actually migrating */
> retval = 0;
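
(FWIW, the way I read this change: since the caller holds threadgroup_lock(),
exec()/de_thread() and fork() cannot change the thread group here, so the
rcu_read_lock() above only has to keep the task_structs from going away while
the snapshot is taken -- schematically, and simplified from the code above:

	rcu_read_lock();
	tsk = leader;
	i = 0;
	do {
		/* store a pointer to tsk in the preallocated flex array */
		i++;
	} while_each_thread(leader, tsk);
	group_size = i;
	rcu_read_unlock();

Nothing in that loop may sleep, but that already holds, as the flex array is
preallocated before we get here.)
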
> @@ -2242,22 +2228,14 @@ static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup)
> if (!cgroup_lock_live_group(cgrp))
> return -ENODEV;
>
> +retry_find_task:
> if (pid) {
> rcu_read_lock();
> tsk = find_task_by_vpid(pid);
> if (!tsk) {
> rcu_read_unlock();
> - cgroup_unlock();
> - return -ESRCH;
> - }
> - if (threadgroup) {
> - /*
> - * RCU protects this access, since tsk was found in the
> - * tid map. a race with de_thread may cause group_leader
> - * to stop being the leader, but cgroup_attach_proc will
> - * detect it later.
> - */
> - tsk = tsk->group_leader;
> + ret= -ESRCH;

	ret = -ESRCH;

(nit: a space is missing between "ret" and "=")

> + goto out_unlock_cgroup;
> }
> /*
> * even if we're attaching all tasks in the thread group, we