Message-ID: <48991973.90109@qualcomm.com>
Date: Tue, 05 Aug 2008 20:24:35 -0700
From: Max Krasnyansky <maxk@...lcomm.com>
To: Paul Jackson <pj@....com>
CC: mingo@...e.hu, linux-kernel@...r.kernel.org, menage@...gle.com,
a.p.zijlstra@...llo.nl, vegard.nossum@...il.com,
lizf@...fujitsu.com
Subject: Re: [PATCH] cpuset: Rework sched domains and CPU hotplug handling
(2.6.27-rc1)

Paul Jackson wrote:
> Max wrote:
>> It could I guess. But the question is why?
>> I mean the only reason we've introduced the workqueue is because lock
>> nesting got too complicated. Otherwise in all those paths we're already
>> in a process context and there is no need to schedule a workqueue.
>
> My priorities are different than yours.
>
> You look at a code path that, in some cases, is more involved than
> necessary, and recommend providing an alternative code path, for
> the cases that can get by executing (significantly) fewer CPU cycles.
>
> I look at the number of code paths, lines of code, duplicated code
> and number of tests and conditions in the source code, and ask how
> to reduce them. I want the least amount of source code, the fewest
> alternative paths through that code, the smallest number of logical
> tests and code variants, the least amount of code duplication.
>
> The key in this case is that I prefer having just one code path by
> which all rebuilds of sched domains are done. Since that rebuild must
> be asynchronous for some cases (to avoid ABBA lock nesting problems)
> therefore let all sched domain rebuilds be done by that async path.
>
> The fewer the number of code path variants, the easier it is to
> understand the code, and the harder it is to break the code.
>
> Except for frequently executed code paths, which this surely is not,
> minimizing software maintenance costs is far more important to me than
> minimizing CPU cycles.

Actually I was not really trying to minimize cpu cycles or anything. But it
does seem like overkill to schedule a workqueue just because we do not want
to expose the fact that rebuild_sched_domains() depends on get_online_cpus().
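
For reference, the async path looks roughly like this (a simplified sketch,
not the literal diff; do_rebuild_sched_domains() and
rebuild_sched_domains_work are illustrative names here, the real code is in
the patch itself):

#include <linux/cpu.h>		/* get_online_cpus()/put_online_cpus() */
#include <linux/workqueue.h>	/* DECLARE_WORK(), schedule_work() */

static void do_rebuild_sched_domains(struct work_struct *unused)
{
	/*
	 * Running from keventd we hold no other locks, so it is safe
	 * to take the hotplug lock here before doing the rebuild.
	 */
	get_online_cpus();
	rebuild_sched_domains();	/* expects the hotplug lock held */
	put_online_cpus();
}

static DECLARE_WORK(rebuild_sched_domains_work, do_rebuild_sched_domains);

/*
 * Called from cpuset code that already holds cgroup_mutex. Deferring
 * the rebuild to a workqueue means we never take the hotplug lock
 * while holding cgroup_mutex, which is the ABBA nesting we want to
 * avoid.
 */
static void async_rebuild_sched_domains(void)
{
	schedule_work(&rebuild_sched_domains_work);
}

The cpuset-internal paths that already hold cgroup_mutex then call
async_rebuild_sched_domains() instead of calling rebuild_sched_domains()
directly.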

More importantly, though, if you think about it I'm actually not introducing
any new alternative paths (besides async_rebuild_sched_domains() itself).
The code paths in question have been synchronous since day one, and have
been tested that way. Now you're saying let's make them asynchronous. Peter
already had a concern about async rebuilds from within the cpuset code
itself. I think those are fine, but I definitely do not want to make domain
rebuilds in the cpu hotplug path asynchronous.

So I'd argue that async_rebuild_sched_domains() is the new path, which we
have to make async due to lock nesting issues. The other paths (cpu hotplug,
topology updates, SMT and MC power settings updates) are already synchronous,
have no lock nesting issues, and therefore do not need to change (async vs
sync).
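
To make the distinction concrete, the hotplug side keeps doing the rebuild
synchronously, along these lines (again just an illustrative sketch; the
notifier name and the exact set of events here are mine, not the patch):

#include <linux/cpu.h>
#include <linux/notifier.h>

static int cpuset_track_online_cpus(struct notifier_block *nb,
				    unsigned long phase, void *cpu)
{
	switch (phase) {
	case CPU_ONLINE:
	case CPU_ONLINE_FROZEN:
	case CPU_DEAD:
	case CPU_DEAD_FROZEN:
		/*
		 * We are in process context in the middle of cpu_up()/
		 * cpu_down(); the hotplug machinery already holds the
		 * hotplug lock, so there is no nesting problem and no
		 * reason to bounce through a workqueue.
		 */
		rebuild_sched_domains();
		return NOTIFY_OK;
	default:
		return NOTIFY_DONE;
	}
}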

Also, IMO it makes sense to keep the CONFIG_CPUSETS and !CONFIG_CPUSETS
handling as similar as possible to avoid surprises. With your proposal, when
cpusets are disabled arch_reinit_sched_domains() will be synchronous, and
when they are enabled it will be asynchronous.

If you feel strongly about this we can discuss it some more and try it out.
But I really do not see the maintenance savings in this case. We'd actually
be changing more things.

Max