Message-Id: <20091217035644.913059330@mini.kroah.org>
Date: Wed, 16 Dec 2009 19:55:12 -0800
From: Greg KH <gregkh@...e.de>
To: linux-kernel@...r.kernel.org, stable@...nel.org
Cc: stable-review@...nel.org, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, alan@...rguk.ukuu.org.uk,
Mike Galbraith <efault@....de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>
Subject: [015/151] sched: Fix and clean up rate-limit newidle code
2.6.32-stable review patch. If anyone has any objections, please let us know.
------------------
From: Mike Galbraith <efault@....de>
commit eae0c9dfb534cb3449888b9601228efa6480fdb5 upstream.
Commit 1b9508f, "Rate-limit newidle", has been confirmed to fix
the netperf UDP loopback regression reported by Alex Shi.
This is a cleanup and a fix:
- the idle_stamp/avg_idle accounting was moved to a more
  out-of-the-way spot later in try_to_wake_up() (sketched
  just below)
- a fix ensures that balancing doesn't try to balance
  runqueues which haven't gone online yet, which can
  mess up CPU enumeration during boot (see the cpumask
  note before the diff).
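For reference, here is a stand-alone userspace model of what the relocated
hunk does and of the newidle rate limit that commit 1b9508f introduced. It
is illustrative only: the default sysctl_sched_migration_cost value
(500000 ns) and the 1/8 weighting of update_avg() are taken from 2.6.32's
kernel/sched.c, while the helper names and the numbers in main() are made
up for the example.

/* Illustrative userspace model, not kernel code. */
#include <stdio.h>
#include <stdint.h>

/* 2.6.32 default for sysctl_sched_migration_cost: 0.5 ms, in nanoseconds. */
static const uint64_t sysctl_sched_migration_cost = 500000ULL;

/* Roughly mirrors update_avg() in kernel/sched.c (which uses diff >> 3). */
static void update_avg(uint64_t *avg, uint64_t sample)
{
        int64_t diff = (int64_t)(sample - *avg);

        *avg += diff / 8;
}

/*
 * What the relocated hunk does when a formerly idle CPU handles a wakeup:
 * fold the measured idle time into avg_idle, clamped to twice the
 * migration cost so a single long idle period cannot dominate the average.
 */
static void wakeup_update_avg_idle(uint64_t *avg_idle, uint64_t *idle_stamp,
                                   uint64_t now)
{
        if (*idle_stamp) {
                uint64_t delta = now - *idle_stamp;
                uint64_t max = 2 * sysctl_sched_migration_cost;

                if (delta > max)
                        *avg_idle = max;
                else
                        update_avg(avg_idle, delta);
                *idle_stamp = 0;
        }
}

/*
 * The gate from commit 1b9508f: newidle balancing is skipped while the
 * CPU's recent idle periods average less than one migration cost.
 */
static int newidle_balance_allowed(uint64_t avg_idle)
{
        return avg_idle >= sysctl_sched_migration_cost;
}

int main(void)
{
        /* Starts at 2*migration_cost, as in the sched_init() hunk. */
        uint64_t avg_idle = 2 * sysctl_sched_migration_cost;
        uint64_t idle_stamp = 1000000;  /* the CPU went idle at t = 1 ms */

        /* The CPU wakes 100 us later: the short idle period lowers avg_idle. */
        wakeup_update_avg_idle(&avg_idle, &idle_stamp, 1100000);
        printf("avg_idle=%llu ns, newidle balancing %s\n",
               (unsigned long long)avg_idle,
               newidle_balance_allowed(avg_idle) ? "allowed" : "skipped");
        return 0;
}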
Reported-by: Alex Shi <alex.shi@...el.com>
Reported-by: Zhang, Yanmin <yanmin_zhang@...ux.intel.com>
Signed-off-by: Mike Galbraith <efault@....de>
Acked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
LKML-Reference: <1257821402.5648.17.camel@...ge.simson.net>
Signed-off-by: Ingo Molnar <mingo@...e.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>
---
kernel/sched.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
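The two cpumask hunks in the diff below replace cpumask_setall() with a
copy of cpu_online_mask. A toy userspace illustration of the difference
during boot follows; the CPU counts and mask values are made up, and this
is not kernel code.

/* Toy illustration only: made-up CPU counts, not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 8

int main(void)
{
        /* Hypothetical early-boot state: 8 CPUs, only 0 and 1 online yet. */
        uint8_t cpu_online_mask = 0x03;

        uint8_t cpus_old = 0xff;                /* old: cpumask_setall()          */
        uint8_t cpus_new = cpu_online_mask;     /* new: copy of cpu_online_mask   */

        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                if ((cpus_old & (1u << cpu)) && !(cpus_new & (1u << cpu)))
                        printf("CPU%d: candidate before it is online\n", cpu);
        }
        return 0;
}

With cpumask_setall() the old code could hand load_balance() candidate CPUs
whose runqueues had not been brought online yet; copying cpu_online_mask
limits balancing to runqueues that are actually up.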
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2386,17 +2386,6 @@ static int try_to_wake_up(struct task_st
if (rq != orig_rq)
update_rq_clock(rq);
- if (rq->idle_stamp) {
- u64 delta = rq->clock - rq->idle_stamp;
- u64 max = 2*sysctl_sched_migration_cost;
-
- if (delta > max)
- rq->avg_idle = max;
- else
- update_avg(&rq->avg_idle, delta);
- rq->idle_stamp = 0;
- }
-
WARN_ON(p->state != TASK_WAKING);
cpu = task_cpu(p);
@@ -2453,6 +2442,17 @@ out_running:
#ifdef CONFIG_SMP
if (p->sched_class->task_wake_up)
p->sched_class->task_wake_up(rq, p);
+
+ if (unlikely(rq->idle_stamp)) {
+ u64 delta = rq->clock - rq->idle_stamp;
+ u64 max = 2*sysctl_sched_migration_cost;
+
+ if (delta > max)
+ rq->avg_idle = max;
+ else
+ update_avg(&rq->avg_idle, delta);
+ rq->idle_stamp = 0;
+ }
#endif
out:
task_rq_unlock(rq, &flags);
@@ -4139,7 +4139,7 @@ static int load_balance(int this_cpu, st
unsigned long flags;
struct cpumask *cpus = __get_cpu_var(load_balance_tmpmask);
- cpumask_setall(cpus);
+ cpumask_copy(cpus, cpu_online_mask);
/*
* When power savings policy is enabled for the parent domain, idle
@@ -4302,7 +4302,7 @@ load_balance_newidle(int this_cpu, struc
int all_pinned = 0;
struct cpumask *cpus = __get_cpu_var(load_balance_tmpmask);
- cpumask_setall(cpus);
+ cpumask_copy(cpus, cpu_online_mask);
/*
* When power savings policy is enabled for the parent domain, idle
@@ -9542,6 +9542,8 @@ void __init sched_init(void)
rq->cpu = i;
rq->online = 0;
rq->migration_thread = NULL;
+ rq->idle_stamp = 0;
+ rq->avg_idle = 2*sysctl_sched_migration_cost;
INIT_LIST_HEAD(&rq->migration_queue);
rq_attach_root(rq, &def_root_domain);
#endif
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/