Message-ID: <20061128012839.GU15364@stusta.de>
Date: Tue, 28 Nov 2006 02:28:39 +0100
From: Adrian Bunk <bunk@...sta.de>
To: linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...e.hu>
Subject: [2.6 patch] cleanup arch/i386/kernel/smpboot.c:smp_tune_scheduling()

This patch contains the following cleanups:
- remove the write-only local variable "bandwidth"
- don't set "max_cache_size" in the (cachesize < 0) case:
  that's already handled in kernel/sched.c:measure_migration_cost()
  (see the sketch below)
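
To illustrate the second point, here is a minimal sketch (explicitly not
the real kernel/sched.c code -- the helper name and the fallback value
below are invented for illustration): the generic migration-cost code can
assume its own default whenever the architecture leaves max_cache_size at
zero, so the arch hook no longer has to fake a value for the
unknown-cache case.

/* Hypothetical sketch only -- not kernel/sched.c as-is. */
unsigned int max_cache_size;	/* 0 means "the arch provided nothing" */

#define ASSUMED_DEFAULT_CACHE_SIZE	(512 * 1024)	/* invented fallback */

static unsigned long effective_cache_size(void)
{
	/*
	 * The generic code decides what to assume when no cache size
	 * is known, so smp_tune_scheduling() does not need to invent
	 * one for the (cachesize < 0) case.
	 */
	if (!max_cache_size)
		return ASSUMED_DEFAULT_CACHE_SIZE;

	return max_cache_size;
}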

Signed-off-by: Adrian Bunk <bunk@...sta.de>

---

 arch/i386/kernel/smpboot.c |   29 +++++------------------------
 1 file changed, 5 insertions(+), 24 deletions(-)

--- linux-2.6.19-rc6-mm1/arch/i386/kernel/smpboot.c.old 2006-11-27 23:36:48.000000000 +0100
+++ linux-2.6.19-rc6-mm1/arch/i386/kernel/smpboot.c 2006-11-27 23:48:16.000000000 +0100
@@ -1127,34 +1127,15 @@
 }
 #endif
-static void smp_tune_scheduling (void)
+static void smp_tune_scheduling(void)
 {
 	unsigned long cachesize; /* kB */
-	unsigned long bandwidth = 350; /* MB/s */
-	/*
-	 * Rough estimation for SMP scheduling, this is the number of
-	 * cycles it takes for a fully memory-limited process to flush
-	 * the SMP-local cache.
-	 *
-	 * (For a P5 this pretty much means we will choose another idle
-	 * CPU almost always at wakeup time (this is due to the small
-	 * L1 cache), on PIIs it's around 50-100 usecs, depending on
-	 * the cache size)
-	 */
-	if (!cpu_khz) {
-		/*
-		 * this basically disables processor-affinity
-		 * scheduling on SMP without a TSC.
-		 */
-		return;
-	} else {
+	if (cpu_khz) {
 		cachesize = boot_cpu_data.x86_cache_size;
-		if (cachesize == -1) {
-			cachesize = 16; /* Pentiums, 2x8kB cache */
-			bandwidth = 100;
-		}
-		max_cache_size = cachesize * 1024;
+
+		if (cachesize > 0)
+			max_cache_size = cachesize * 1024;
 	}
 }
-
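
For reference, after this patch the function reads as follows
(reconstructed by applying the hunk above; whitespace approximated):

static void smp_tune_scheduling(void)
{
	unsigned long cachesize; /* kB */

	if (cpu_khz) {
		cachesize = boot_cpu_data.x86_cache_size;

		if (cachesize > 0)
			max_cache_size = cachesize * 1024;
	}
}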
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/