Message-ID: <20070816010426.GH10033@linux-os.sc.intel.com>
Date: Wed, 15 Aug 2007 18:04:26 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: mingo@...e.hu
Cc: nickpiggin@...oo.com.au, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org
Subject: [patch] sched: skip updating rq's next_balance under null SD
Was playing with sched_smt_power_savings/sched_mc_power_savings and found
that while the scheduler domains are being reconstructed after a sysfs
settings change, rebalance_domains() can get triggered with a null domain on
other cpus, which sets next_balance to jiffies + 60*HZ, resulting in no
idle/busy balancing for 60 seconds.

Fix this.
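
For illustration, a minimal standalone sketch of the control flow (not kernel
code; fake_domain, fake_jiffies and rebalance_sketch() are made-up names):
with a null domain list the loop body never runs, so next_balance keeps its
jiffies + 60*HZ default, and only the guard below keeps that value from being
written back into the runqueue.

	#include <stdio.h>

	#define HZ 1000

	struct fake_domain {
		unsigned long last_balance;
		unsigned long interval;
		struct fake_domain *next;
	};

	static unsigned long fake_jiffies = 100000;

	/* Mimics the tail of rebalance_domains() with the proposed guard. */
	static unsigned long rebalance_sketch(struct fake_domain *sd_list,
					      unsigned long rq_next_balance)
	{
		/* Earliest time when we have to do rebalance again */
		unsigned long next_balance = fake_jiffies + 60*HZ;
		int update_next_balance = 0;
		struct fake_domain *sd;

		for (sd = sd_list; sd; sd = sd->next) {
			if (next_balance > sd->last_balance + sd->interval) {
				next_balance = sd->last_balance + sd->interval;
				update_next_balance = 1;
			}
		}

		/*
		 * Without this guard, a cpu attached to a null domain would
		 * get its next_balance pushed out to jiffies + 60*HZ.
		 */
		if (update_next_balance)
			rq_next_balance = next_balance;

		return rq_next_balance;
	}

	int main(void)
	{
		/* Null domain list: next_balance is left untouched. */
		printf("null SD: %lu\n", rebalance_sketch(NULL, fake_jiffies + 10));
		return 0;
	}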
Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
---
diff --git a/kernel/sched.c b/kernel/sched.c
index 45e17b8..74565c0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3020,6 +3020,7 @@ static inline void rebalance_domains(int cpu, enum cpu_idle_type idle)
 	struct sched_domain *sd;
 	/* Earliest time when we have to do rebalance again */
 	unsigned long next_balance = jiffies + 60*HZ;
+	int update_next_balance = 0;
 
 	for_each_domain(cpu, sd) {
 		if (!(sd->flags & SD_LOAD_BALANCE))
@@ -3056,8 +3057,10 @@ static inline void rebalance_domains(int cpu, enum cpu_idle_type idle)
 		if (sd->flags & SD_SERIALIZE)
 			spin_unlock(&balancing);
 out:
-		if (time_after(next_balance, sd->last_balance + interval))
+		if (time_after(next_balance, sd->last_balance + interval)) {
 			next_balance = sd->last_balance + interval;
+			update_next_balance = 1;
+		}
 
 		/*
 		 * Stop the load balance at this level. There is another
@@ -3067,7 +3070,14 @@ out:
 		if (!balance)
 			break;
 	}
-	rq->next_balance = next_balance;
+
+	/*
+	 * next_balance will be updated only when there is a need.
+	 * When the cpu is attached to null domain for ex, it will not be
+	 * updated.
+	 */
+	if (likely(update_next_balance))
+		rq->next_balance = next_balance;
 }
 
 /*