Message-ID: <4B0EA88E.3030205@linux.vnet.ibm.com>
Date: Thu, 26 Nov 2009 17:10:54 +0100
From: Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
To: Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: Holger.Wolf@...ibm.com, epasch@...ibm.com
Subject: Missing recalculation of scheduler tunables in case of cpu hot add/remove
Hi everybody,
while testing different scheduler tunables I came across the function
sched_init_granularity, which recalculates the values of
sysctl_sched_min_granularity, sysctl_sched_latency and
sysctl_sched_wakeup_granularity based on the number of cpus online at
boot time. One might argue that the 1 + ilog2(num_online_cpus())
factor is wrong or suboptimal, but I want to avoid that discussion
(at least in this thread :-)).
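For reference, the boot time recalculation does roughly the following
(a simplified sketch, not the verbatim kernel source):

	static void __init sched_init_granularity(void)
	{
		/* scale the 1-cpu defaults by 1 + log2(#online cpus) */
		unsigned int factor = 1 + ilog2(num_online_cpus());

		sysctl_sched_min_granularity *= factor;
		sysctl_sched_latency *= factor;
		sysctl_sched_wakeup_granularity *= factor;
	}

So a box booted with a single online cpu keeps factor 1, while e.g. a
64-way box gets factor 7.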
What I consider more important at the moment is that there is no hook
to recalculate these values when cpus are hot added or removed.
As an example, someone could boot a machine with one online cpu and
get the low, unscaled defaults; later on, driven by load, the system
activates more and more processors. The system could thus end up with
a large number of cpus but scheduler tunables that were never
recalculated.
I'm looking forward to whatever other solution approaches come up;
the following is just a suggestion.
We might store the corresponding 1-cpu base values in hidden variables
and rescale the effective ones on every cpu add/remove. Additionally,
some logic would be needed to update those base values every time a
user sets new values via the proc interface; a rough sketch of both
parts follows.
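Something along these lines, just a sketch with made-up names rather
than a tested patch:

	#include <linux/cpu.h>
	#include <linux/log2.h>
	#include <linux/notifier.h>

	/* hypothetical: unscaled 1-cpu base values, captured at boot
	 * before the sysctls get scaled */
	static unsigned int base_sched_min_granularity;
	static unsigned int base_sched_latency;
	static unsigned int base_sched_wakeup_granularity;

	static unsigned int sched_scaling_factor(void)
	{
		return 1 + ilog2(num_online_cpus());
	}

	/* rescale the effective tunables from the 1-cpu bases */
	static void sched_rescale_tunables(void)
	{
		unsigned int factor = sched_scaling_factor();

		sysctl_sched_min_granularity = base_sched_min_granularity * factor;
		sysctl_sched_latency = base_sched_latency * factor;
		sysctl_sched_wakeup_granularity = base_sched_wakeup_granularity * factor;
	}

	/* called from the sysctl proc handler after a user write, so
	 * the stored 1-cpu bases stay consistent with the new
	 * effective values */
	static void sched_update_base_values(void)
	{
		unsigned int factor = sched_scaling_factor();

		base_sched_min_granularity = sysctl_sched_min_granularity / factor;
		base_sched_latency = sysctl_sched_latency / factor;
		base_sched_wakeup_granularity = sysctl_sched_wakeup_granularity / factor;
	}

	static int sched_tunables_cpu_notify(struct notifier_block *nb,
					     unsigned long action, void *hcpu)
	{
		switch (action) {
		case CPU_ONLINE:
		case CPU_DEAD:
			sched_rescale_tunables();
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block sched_tunables_nb = {
		.notifier_call = sched_tunables_cpu_notify,
	};

	/* registered once at boot:
	 * register_cpu_notifier(&sched_tunables_nb); */

The integer division in sched_update_base_values() is where the
rounding mentioned below comes from.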
I have already thought about potential rounding errors in the
suggested solution (e.g. with factor 3, an effective latency of
5000000 ns stores a base of 1666666 ns, which rescales to 4999998 ns,
a 2 ns error), but being a few nanoseconds off should be far better
than systems being whole factors off the value they should have.
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization