From: Frederic Weisbecker

While load balancing an rq target, we look for the busiest group.
This operation may require an up-to-date rq clock if we end up
calling scale_rt_power(). To this end, update it manually if the
target is running tickless.

DOUBT: don't we actually also need this in the vanilla kernel, in
case this_cpu is in dyntick-idle mode?

Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Avi Kivity
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Daniel Lezcano
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Kevin Hilman
Cc: Max Krasnyansky
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Stephen Hemminger
Cc: Steven Rostedt
Cc: Sven-Thorsten Dietrich
Cc: Thomas Gleixner
---
 kernel/sched/fair.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 89e816e..b1b9a20 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4252,6 +4252,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,

 	schedstat_inc(sd, lb_count[idle]);

+	/*
+	 * find_busiest_group() may need an up-to-date cpu clock
+	 * (see scale_rt_power()). If the CPU is in adaptive
+	 * nohz mode, its clock may be stale.
+	 */
+	if (cpuset_cpu_adaptive_nohz(this_cpu)) {
+		local_irq_save(flags);
+		raw_spin_lock(&this_rq->lock);
+		update_rq_clock(this_rq);
+		raw_spin_unlock(&this_rq->lock);
+		local_irq_restore(flags);
+	}
+
 redo:
 	group = find_busiest_group(&env, balance);
-- 
1.7.10.4
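
For context on why the rq clock matters here: scale_rt_power() derives the
CPU power left for CFS from how much RT time accumulated over a window
measured against rq->clock, so a clock that stopped advancing when the tick
stopped skews the result that find_busiest_group() sees. Below is a rough
userspace model of that dependency; the helper name scale_rt_power_model and
the constants are illustrative only, not taken from the kernel or this patch.

/*
 * Userspace model (not kernel code) of how a scale_rt_power()-style
 * calculation depends on the clock reading it is given.
 */
#include <stdio.h>
#include <stdint.h>

#define SCHED_POWER_SCALE	1024ULL
#define AVG_PERIOD_NS		1000000ULL	/* assumed averaging window */

/*
 * Fraction of CPU power left for fair tasks once RT time is removed,
 * computed against a clock reading. A stale clock shrinks "total",
 * overstating the RT share and understating the remaining power.
 */
static uint64_t scale_rt_power_model(uint64_t clock, uint64_t age_stamp,
				     uint64_t rt_avg)
{
	uint64_t total = AVG_PERIOD_NS + (clock - age_stamp);
	uint64_t available = total > rt_avg ? total - rt_avg : 0;

	return available * SCHED_POWER_SCALE / total;
}

int main(void)
{
	uint64_t age_stamp = 0, rt_avg = 500000;	/* 0.5ms of RT time */

	/* Clock updated recently vs. a clock frozen since the last tick. */
	printf("fresh clock: %llu\n",
	       (unsigned long long)scale_rt_power_model(3000000, age_stamp, rt_avg));
	printf("stale clock: %llu\n",
	       (unsigned long long)scale_rt_power_model(1000000, age_stamp, rt_avg));
	return 0;
}

With the numbers above the fresh clock yields 896/1024 of the power and the
stale one only 768/1024, which is the kind of skew the manual
update_rq_clock() before find_busiest_group() is meant to avoid.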