Message-ID: <487CDE15.1070401@qualcomm.com>
Date: Tue, 15 Jul 2008 10:27:49 -0700
From: Max Krasnyansky <maxk@...lcomm.com>
To: Paul Menage <menage@...gle.com>
CC: mingo@...e.hu, pj@....com, linux-kernel@...r.kernel.org,
a.p.zijlstra@...llo.nl
Subject: Re: [PATCH] cpuset: Make rebuild_sched_domains() usable from any
 context

Paul Menage wrote:
> On Tue, Jul 15, 2008 at 4:44 AM, Max Krasnyansky <maxk@...lcomm.com> wrote:
>> From: Max Krasnyanskiy <maxk@...lcomm.com>
>>
>> I do not really like the current solution of dropping cgroup lock
>> but it shows what I have in mind in general.
>
> I think that dropping the cgroup lock will open up races for cpusets.
> The idea of a separate workqueue/thread to do the sched domain
> rebuilding is simplest.

Actually I audited (to the best of my knowledge) all the paths in
cpusets, and rebuild_sched_domains() is the last action, i.e. we drop
the lock right after it anyway. It's just that it's embedded deep in
the call stack, and therefore I cannot drop it at the higher level.
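
To illustrate the pattern I mean (a rough sketch; the exact handler
names are just examples):

   /* e.g. a cpuset write handler */
   cgroup_lock();
   update_cpumask(cs, buf);
           ...
           rebuild_sched_domains();  /* last action under the lock */
   cgroup_unlock();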

The only path where I think it's not safe is the cgroup destroy path,
where we do roughly this (in cgroup.c):

   cgroup_lock();
   for_each_cgroup(...)
           cg->destroy();
   cgroup_unlock();

So in theory it's just that one path that really needs the workqueue
trick. But I do agree that it'll make things less tricky across the
board. So I'll pick up your workqueue-based patch, convert it to a
single-threaded workqueue, bang on it a bit later today, and send a
patch on top of this one.
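
Roughly what I have in mind (untested sketch; the workqueue and the
async helper names are placeholders):

   static struct workqueue_struct *cpuset_wq;

   static void rebuild_sched_domains_workfn(struct work_struct *work)
   {
           /* Runs in process context with no locks held, so the
            * rebuild can take whatever locks it needs itself. */
           rebuild_sched_domains();
   }
   static DECLARE_WORK(rebuild_sched_domains_work,
                       rebuild_sched_domains_workfn);

   /* Callers deep in the cpuset call chain queue the rebuild
    * instead of doing it synchronously under cgroup_lock(). */
   void async_rebuild_sched_domains(void)
   {
           queue_work(cpuset_wq, &rebuild_sched_domains_work);
   }

with cpuset_wq set up once at init time via something like
create_singlethread_workqueue("cpuset"), so the rebuilds are
serialized by the single thread.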
Max