Message-ID: <49af242d.1c07d00a.32d5.ffffc019@mx.google.com>
Date: Thu, 5 Mar 2009 01:27:02 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH 1/2] sched: don't rebalance if attached on NULL domain
Impact: fix function graph trace hang / drop pointless softirq on UP
While debugging a function graph trace hang on an old PII, I saw that it
spent most of its time in the timer interrupt, and that the domain
rebalancing softirq was the main offender.
The timer interrupt calls trigger_load_balance(), which decides whether it is
worth scheduling a rebalancing softirq.
With a UP-built kernel, no problem arises because sched domains are not
involved at all.
With an SMP-built kernel running on an SMP box, there is still no problem:
the softirq is raised each time next_balance is reached.
With an SMP-built kernel running on a UP box (most distros ship SMP kernels
by default, whatever box you have), the CPU is attached to the NULL sched
domain, and an unexpected behaviour happens:

trigger_load_balance() -> raises the rebalancing softirq
later, in the softirq: run_rebalance_domains() -> rebalance_domains(), where
the for_each_domain(cpu, sd) loop is not taken because of the NULL domain we
are attached to.
Which means rq->next_balance is never updated.
So on the next timer tick, we enter trigger_load_balance() again, which will
always re-raise the rebalancing softirq:

	if (time_after_eq(jiffies, rq->next_balance))
		raise_softirq(SCHED_SOFTIRQ);

So on every tick we process this pointless softirq.
This patch fixes it by checking whether we are attached to the NULL domain
before raising the softirq. Another possible fix would be to set
rq->next_balance to the maximal possible jiffies value while we are
attached to the NULL domain.
Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
---
kernel/sched.c | 9 ++++++++-
1 files changed, 8 insertions(+), 1 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 7335a65..89e2ca0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -680,6 +680,11 @@ inline void update_rq_clock(struct rq *rq)
 	rq->clock = sched_clock_cpu(cpu_of(rq));
 }
 
+static inline int on_null_domain(int cpu)
+{
+	return !rcu_dereference(cpu_rq(cpu)->sd);
+}
+
 /*
  * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
  */
@@ -4267,7 +4272,9 @@ static inline void trigger_load_balance(struct rq *rq, int cpu)
 	    cpumask_test_cpu(cpu, nohz.cpu_mask))
 		return;
 #endif
-	if (time_after_eq(jiffies, rq->next_balance))
+	/* Don't need to rebalance while attached to NULL domain */
+	if (time_after_eq(jiffies, rq->next_balance) &&
+	    likely(!on_null_domain(cpu)))
 		raise_softirq(SCHED_SOFTIRQ);
 }
 
--
1.6.1
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/