Message-ID: <50B743A1.4040405@parallels.com>
Date: Thu, 29 Nov 2012 15:14:41 +0400
From: Glauber Costa <glommer@...allels.com>
To: Tejun Heo <tj@...nel.org>
CC: <lizefan@...wei.com>, <paul@...lmenage.org>,
<containers@...ts.linux-foundation.org>, <cgroups@...r.kernel.org>,
<peterz@...radead.org>, <mhocko@...e.cz>, <bsingharora@...il.com>,
<hannes@...xchg.org>, <kamezawa.hiroyu@...fujitsu.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCHSET cgroup/for-3.8] cpuset: decouple cpuset locking from
cgroup core
On 11/29/2012 01:34 AM, Tejun Heo wrote:
> This patchset decouples cpuset locking from cgroup_mutex. After the
> patchset, cpuset uses cpuset-specific cpuset_mutex instead of
> cgroup_mutex. This also removes the lockdep warning triggered during
> cpu offlining (see 0009).
>
> Note that this leaves memcg as the only external user of cgroup_mutex.
> Michal, Kame, can you guys please convert memcg to use its own locking
> too?
Not entirely. There is still one mention of cgroup_lock():
static void cpuset_do_move_task(struct task_struct *tsk,
                                struct cgroup_scanner *scan)
{
        struct cgroup *new_cgroup = scan->data;

        cgroup_lock();
        cgroup_attach_task(new_cgroup, tsk);
        cgroup_unlock();
}
A similar problem to this one exists in memcg: we need to somehow
guarantee that no tasks will join the cgroup for some time - this is
why we hold the lock in memcg.
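
To make that concrete, here is a minimal sketch of the pattern (the
memcg-side names are illustrative, not the real code): holding
cgroup_lock() is what keeps the task set stable between checking it
and changing state that depends on it:

/*
 * Illustrative sketch -- memcg_example_activate() and the kmem_active
 * field are hypothetical. Holding cgroup_lock() guarantees no task
 * joins the cgroup between the task-count check and the state change.
 */
static int memcg_example_activate(struct mem_cgroup *memcg)
{
        int ret = 0;

        cgroup_lock();
        if (cgroup_task_count(memcg->css.cgroup)) {
                ret = -EBUSY;
                goto out;
        }
        memcg->kmem_active = true;      /* hypothetical flag */
out:
        cgroup_unlock();
        return ret;
}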
Just calling a function that internally takes the cgroup lock won't do
much, since it doesn't remove any of the lock dependencies - where the
lock ends up being taken from matters little.
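
For illustration (a hypothetical wrapper, not from the patchset): even
if cpuset only ever called a helper like the one below instead of
cgroup_lock() directly, lockdep would still record the very same
cgroup_mutex dependency at this call site:

/*
 * Hypothetical wrapper, for illustration only: the dependency is
 * created where the mutex is actually acquired, so hiding
 * cgroup_lock() behind a helper changes nothing for lockdep.
 */
static void cpuset_attach_task_locked(struct cgroup *cgrp,
                                      struct task_struct *tsk)
{
        cgroup_lock();                  /* still takes cgroup_mutex */
        cgroup_attach_task(cgrp, tsk);
        cgroup_unlock();
}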
What I'll try to do is come up with another specialized lock in cgroup
just for this case: after taking the cgroup lock, we would also take
an extra lock whenever we add another entry - be it a task or a child
cgroup - to the cgroup. cpuset and memcg could then take that lock as
well, explicitly or implicitly.
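
Something along these lines, as a sketch only (all names are
hypothetical): cgroup core would provide a dedicated lock covering
additions, take it on the attach and mkdir paths, and controllers
could then take it without ever touching cgroup_mutex:

/*
 * Sketch of the proposal -- all names are hypothetical. A separate
 * mutex serializes additions to a cgroup (task attach or child
 * creation). cgroup core would take it, nested inside cgroup_mutex,
 * on those paths; cpuset and memcg could take it on its own.
 */
static DEFINE_MUTEX(cgroup_add_mutex);

void cgroup_add_lock(void)
{
        mutex_lock(&cgroup_add_mutex);
}

void cgroup_add_unlock(void)
{
        mutex_unlock(&cgroup_add_mutex);
}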
How does it sound?