Message-ID: <5171EDC1.4070805@huawei.com>
Date: Sat, 20 Apr 2013 09:22:09 +0800
From: Li Zefan <lizefan@...wei.com>
To: Tejun Heo <tj@...nel.org>
CC: LKML <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>,
Containers <containers@...ts.linux-foundation.org>,
Glauber Costa <glommer@...allels.com>
Subject: Re: [PATCH 09/10] cpuset: allow to keep tasks in empty cpusets
On 2013/4/20 4:58, Tejun Heo wrote:
> Hello,
>
> On Fri, Apr 19, 2013 at 08:29:24PM +0800, Li Zefan wrote:
>> +static void update_tasks_cpumask_hier(struct cpuset *root_cs,
>> +				       bool update_root, struct ptr_heap *heap)
>> +{
>> +	struct cpuset *cp;
>> +	struct cgroup *pos_cgrp;
>> +
>> +	if (update_root)
>> +		update_tasks_cpumask(root_cs, heap);
>> +
>> +	rcu_read_lock();
>> +	cpuset_for_each_descendant_pre(cp, pos_cgrp, root_cs) {
>> +		/* skip the whole subtree if @cp has some CPUs */
>> +		if (!cpumask_empty(cp->cpus_allowed)) {
>> +			pos_cgrp = cgroup_rightmost_descendant(pos_cgrp);
>> +			continue;
>> +		}
>> +
>> +		update_tasks_cpumask(cp, heap);
>> +	}
>> +	rcu_read_unlock();
>
> I don't think we can call update_tasks_cpumask() under
> rcu_read_lock(). It calls into set_cpus_allowed_ptr() which may
> block, so you'll probably have to punt it to a work item like how
> migration is being done.

Oh, will fix.
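
Just to sketch the direction (untested): instead of a work item, we
could pin each cpuset with css_tryget() and drop the RCU lock around
the blocking call, something like the below. Resuming the pre-order
walk safely after re-locking still needs care, though.

	rcu_read_lock();
	cpuset_for_each_descendant_pre(cp, pos_cgrp, root_cs) {
		if (!cpumask_empty(cp->cpus_allowed)) {
			pos_cgrp = cgroup_rightmost_descendant(pos_cgrp);
			continue;
		}

		/* pin @cp so it can't be freed while we sleep */
		if (!css_tryget(&cp->css))
			continue;
		rcu_read_unlock();

		update_tasks_cpumask(cp, heap);	/* may block */

		rcu_read_lock();
		css_put(&cp->css);
	}
	rcu_read_unlock();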
> Another approach would be converting cgroup
> to use SRCU instead, which would lessen pain on other places too. The
> only problem there would be that srcu_read_lock() is a bit more
> expensive than rcu_read_lock(). I'm not sure whether that'd show up
> in some hot path or not. Ideas?
>
I guess we can live with rcu_read_lock() for now, and see if we can
change it to SRCU when the other significant changes are done in all
the cgroup controllers (hierarchy support in blkcg, etc.).
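
For comparison, the read side under SRCU would look something like
this, assuming a cgroup-wide SRCU domain (cgroup_srcu here is
hypothetical, just for illustration):

	DEFINE_STATIC_SRCU(cgroup_srcu);	/* hypothetical cgroup-wide domain */

	int idx;

	idx = srcu_read_lock(&cgroup_srcu);
	cpuset_for_each_descendant_pre(cp, pos_cgrp, root_cs) {
		if (!cpumask_empty(cp->cpus_allowed)) {
			pos_cgrp = cgroup_rightmost_descendant(pos_cgrp);
			continue;
		}

		update_tasks_cpumask(cp, heap);	/* sleeping is fine under SRCU */
	}
	srcu_read_unlock(&cgroup_srcu, idx);

The write side (cgroup destruction) would then have to use
synchronize_srcu(&cgroup_srcu), and as you said, srcu_read_lock() is
costlier, so it's worth measuring on the hot paths first.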