Message-ID: <487D0DC4.8090103@qualcomm.com>
Date: Tue, 15 Jul 2008 13:51:16 -0700
From: Max Krasnyansky <maxk@...lcomm.com>
To: Paul Menage <menage@...gle.com>
CC: mingo@...e.hu, pj@....com, linux-kernel@...r.kernel.org,
a.p.zijlstra@...llo.nl
Subject: Re: [PATCH] cpuset: Make rebuild_sched_domains() usable from any
 context

Paul Menage wrote:
> On Tue, Jul 15, 2008 at 4:44 AM, Max Krasnyansky <maxk@...lcomm.com> wrote:
>> From: Max Krasnyanskiy <maxk@...lcomm.com>
>>
>> I do not really like the current solution of dropping cgroup lock
>> but it shows what I have in mind in general.
>
> I think that dropping the cgroup lock will open up races for cpusets.
> The idea of a separate workqueue/thread to do the sched domain
> rebuilding is simplest.
Actually I think we do not have to make it a super strict "only rebuild
from that thread" rule. I'd only off-load cpuset_write64() and
update_flag() to the thread. It'd be nice to keep the hotplug path
cleanly synchronous. It's synchronous without cpusets, so there is
really no good reason why it needs to be async with them. And the
toughest part is not even hotplug, where the lock nesting is pretty
clear:
get_online_cpus() ->
    rebuild_sched_domains() ->
        cgroup_lock();
        // Build cpumaps
        cpuset_callback_lock();
        ...
        cpuset_callback_unlock();
        cgroup_unlock();
        partition_sched_domains() ->
            mutex_lock(&sched_domains_mutex);
            // Rebuild sched domains
            mutex_unlock(&sched_domains_mutex);
put_online_cpus()
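
To make that concrete, here is a minimal sketch of what the synchronous
hotplug path could look like; the notifier name cpuset_handle_cpuhp()
and the exact phases handled are illustrative, not taken from the patch:

#include <linux/cpu.h>
#include <linux/notifier.h>

/*
 * CPU hotplug callbacks run in process context under
 * get_online_cpus(), so rebuild_sched_domains() can take
 * cgroup_lock() directly with exactly the nesting shown above.
 */
static int cpuset_handle_cpuhp(struct notifier_block *nb,
			       unsigned long phase, void *cpu)
{
	switch (phase) {
	case CPU_ONLINE:
	case CPU_DEAD:
		rebuild_sched_domains();
		break;
	default:
		break;
	}
	return NOTIFY_DONE;
}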
The problem is the other paths, where cgroup_lock() is taken by cgroups
before even calling into cpusets, like the cgroup destroy case.
So I think we should just off-load those.
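
Something like this is what I have in mind for the off-loaded paths;
the helper names rebuild_sched_domains_workfn() and
async_rebuild_sched_domains() are illustrative, just to show the idea:

#include <linux/workqueue.h>

static void rebuild_sched_domains_workfn(struct work_struct *unused)
{
	/*
	 * Runs later in keventd's process context with no locks held,
	 * so it is safe to take cgroup_lock() in here.
	 */
	rebuild_sched_domains();
}
static DECLARE_WORK(rebuild_sched_domains_work,
		    rebuild_sched_domains_workfn);

static void async_rebuild_sched_domains(void)
{
	/*
	 * Callers like cpuset_write64() and update_flag() already hold
	 * cgroup_lock(), so just queue the rebuild instead of recursing
	 * into the locking shown above.
	 */
	schedule_work(&rebuild_sched_domains_work);
}

That keeps the hotplug notifier synchronous while the paths that enter
with cgroup_lock() held simply defer the rebuild.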
Max