Message-ID: <1288097074.15336.211.camel@twins>
Date: Tue, 26 Oct 2010 14:44:34 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Venkatesh Pallipadi <venki@...gle.com>
Cc: Damien Wyart <damien.wyart@...e.fr>,
Chase Douglas <chase.douglas@...onical.com>,
Ingo Molnar <mingo@...e.hu>, tmhikaru@...il.com,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: High CPU load when machine is idle (related to PROBLEM:
Unusually high load average when idle in 2.6.35, 2.6.35.1 and later)
On Mon, 2010-10-25 at 12:12 +0200, Peter Zijlstra wrote:
> On Fri, 2010-10-22 at 16:03 -0700, Venkatesh Pallipadi wrote:
> > I started making small changes to the code, but none of the changes helped much.
> > I think the problem with the current code is that, even though idle CPUs
> > update load, the fold only happens when one of the CPUs is busy,
> > and we end up taking its load into the global load.
> >
> > So, I tried to simplify things by doing the updates directly from the idle loop.
> > This is only a test patch, and eventually we need to hook it in somewhere
> > else instead of the idle loop; also, this is expected to work only on x86_64
> > right now.
> >
> > Peter: Do you think something like this will work? loadavg went
> > quiet on two of my test systems after this change (4 CPU and 24 CPU).
>
> Not really; CPUs can stay idle for _very_ long times (non-x86 CPUs that
> don't have crappy timers like the HPET, which wraps around every 2-4 seconds).
>
> But all CPUs staying idle for a long time is exactly the scenario you
> fixed before with the decay_load_missed() stuff, except that was for the
> load-balancer's per-CPU load numbers, not the global CPU load average. Won't a
> similar approach work here?
The crude patch would be something like the below; a smarter patch would
try to avoid that loop.
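For reference, the decay each loop iteration applies is the usual fixed-point
load-average step; roughly the following, paraphrased from kernel/sched.c and
include/linux/sched.h of this era (quoted for context, not part of the patch):

/*
 * avenrun[] is kept in FIXED_1 fixed-point; every LOAD_FREQ interval
 * one decay step is applied:
 *
 *	avg = avg * exp / FIXED_1 + active * (FIXED_1 - exp) / FIXED_1
 */
#define FSHIFT		11			/* bits of precision */
#define FIXED_1		(1 << FSHIFT)		/* 1.0 in fixed-point */
#define EXP_1		1884			/* 1/exp(5sec/1min) in fixed-point */
#define EXP_5		2014			/* 1/exp(5sec/5min) */
#define EXP_15		2037			/* 1/exp(5sec/15min) */

static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

So after a long NO_HZ idle period the loop below has to replay one such step
per missed LOAD_FREQ interval, which is the part a smarter patch would avoid.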
---
 include/linux/sched.h |    2 +-
 kernel/sched.c        |   20 +++++++++-----------
 kernel/timer.c        |    2 +-
 3 files changed, 11 insertions(+), 13 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 7a6e81f..84c1bf1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -143,7 +143,7 @@ extern unsigned long nr_iowait_cpu(int cpu);
extern unsigned long this_cpu_load(void);
-extern void calc_global_load(void);
+extern void calc_global_load(int ticks);
extern unsigned long get_parent_ip(unsigned long addr);
diff --git a/kernel/sched.c b/kernel/sched.c
index 41f1869..49a2baf 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3171,22 +3171,20 @@ calc_load(unsigned long load, unsigned long exp, unsigned long active)
  * calc_load - update the avenrun load estimates 10 ticks after the
  * CPUs have updated calc_load_tasks.
  */
-void calc_global_load(void)
+void calc_global_load(int ticks)
 {
-	unsigned long upd = calc_load_update + 10;
 	long active;
 
-	if (time_before(jiffies, upd))
-		return;
-
-	active = atomic_long_read(&calc_load_tasks);
-	active = active > 0 ? active * FIXED_1 : 0;
+	while (!time_before(jiffies, calc_load_update + 10)) {
+		active = atomic_long_read(&calc_load_tasks);
+		active = active > 0 ? active * FIXED_1 : 0;
 
-	avenrun[0] = calc_load(avenrun[0], EXP_1, active);
-	avenrun[1] = calc_load(avenrun[1], EXP_5, active);
-	avenrun[2] = calc_load(avenrun[2], EXP_15, active);
+		avenrun[0] = calc_load(avenrun[0], EXP_1, active);
+		avenrun[1] = calc_load(avenrun[1], EXP_5, active);
+		avenrun[2] = calc_load(avenrun[2], EXP_15, active);
 
-	calc_load_update += LOAD_FREQ;
+		calc_load_update += LOAD_FREQ;
+	}
 }
 
 /*
diff --git a/kernel/timer.c b/kernel/timer.c
index d6ccb90..9f82b2a 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -1297,7 +1297,7 @@ void do_timer(unsigned long ticks)
 {
 	jiffies_64 += ticks;
 	update_wall_time();
-	calc_global_load();
+	calc_global_load(ticks);
 }
 
 #ifdef __ARCH_WANT_SYS_ALARM
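As for avoiding the catch-up loop, a minimal sketch of what a smarter variant
could do (illustrative only, reusing the FSHIFT/FIXED_1 format above; the
helper names fixed_power_int() and calc_load_n() are assumptions here, not an
existing API in this tree): count the missed LOAD_FREQ intervals n and fold
them in a single step by raising the decay factor to the n-th power with
fixed-point repeated squaring.

/*
 * Illustrative sketch only: fold n missed intervals at once,
 *
 *	avg(n) = avg(0) * e^n + active * (1 - e^n)
 *
 * with e in FIXED_1 fixed-point (EXP_1/EXP_5/EXP_15).
 */
static unsigned long
fixed_power_int(unsigned long x, unsigned int frac_bits, unsigned int n)
{
	unsigned long result = 1UL << frac_bits;	/* 1.0 in fixed-point */

	while (n) {
		if (n & 1) {			/* multiply result by current power of x */
			result *= x;
			result += 1UL << (frac_bits - 1);	/* round */
			result >>= frac_bits;
		}
		n >>= 1;
		x *= x;				/* square x for the next bit of n */
		x += 1UL << (frac_bits - 1);
		x >>= frac_bits;
	}
	return result;
}

static unsigned long
calc_load_n(unsigned long load, unsigned long exp,
	    unsigned long active, unsigned int n)
{
	unsigned long exp_n = fixed_power_int(exp, FSHIFT, n);

	load *= exp_n;
	load += active * (FIXED_1 - exp_n);
	return load >> FSHIFT;
}

With something like that, the while loop in calc_global_load() would collapse
to one calc_load_n() call per avenrun[] entry, with n derived from how far
jiffies has run past calc_load_update.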
--