Message-ID: <20100301174815.GC8224@sgi.com>
Date: Mon, 1 Mar 2010 11:48:15 -0600
From: Dimitri Sivanich <sivanich@....com>
To: linux-kernel@...r.kernel.org
Cc: "H. Peter Anvin" <hpa@...or.com>, venkatesh.pallipadi@...el.com,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>
Subject: [PATCH v2] x86: Fix sched_clock_cpu for systems with
unsynchronized TSC
On UV systems, the TSC is not synchronized across blades. The
sched_clock_cpu() function is returning values that can go backwards
(I've seen as much as 8 seconds) when switching between cpus.
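For context, sched_clock_cpu() trusts the raw TSC-based clock whenever
sched_clock_stable is set, and only falls back to the per-cpu,
monotonicity-corrected path when the flag is clear. A rough paraphrase
of that logic (not the exact kernel/sched_clock.c source; the slow-path
helper name below is made up for illustration):

	u64 sched_clock_cpu(int cpu)
	{
		if (sched_clock_stable)
			/* raw clock, no per-cpu correction: assumes TSCs are in sync */
			return sched_clock();

		/* hypothetical name for the per-cpu, monotonicity-enforcing slow path */
		return per_cpu_corrected_clock(cpu);
	}

So once the flag is (incorrectly) set, nothing protects against the
cross-blade TSC skew described above.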
As each cpu comes up, early_init_intel() currently sets the
sched_clock_stable flag to true. When mark_tsc_unstable() runs, it
clears the flag, but this happens only once (the first time a cpu comes
up whose TSC is not synchronized with cpu 0). After that,
early_init_intel() sets the flag again as the next cpu comes up.
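The once-only behaviour comes from mark_tsc_unstable() itself, which is
guarded by the tsc_unstable flag and therefore only acts on the first
offending cpu. Approximately (paraphrased from arch/x86/kernel/tsc.c of
this era; details may differ):

	void mark_tsc_unstable(char *reason)
	{
		if (!tsc_unstable) {	/* only acts the first time */
			tsc_unstable = 1;
			sched_clock_stable = 0;
			printk(KERN_INFO "Marking TSC unstable due to %s\n", reason);
			/* clocksource rating downgrade elided */
		}
	}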
Only set sched_clock_stable if the TSC has not been marked unstable.
Signed-off-by: Dimitri Sivanich <sivanich@....com>
---
Only affects x86 arch.
arch/x86/kernel/cpu/intel.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Index: linux/arch/x86/kernel/cpu/intel.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/intel.c
+++ linux/arch/x86/kernel/cpu/intel.c
@@ -70,7 +70,8 @@ static void __cpuinit early_init_intel(s
 	if (c->x86_power & (1 << 8)) {
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
 		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
-		sched_clock_stable = 1;
+		if (!check_tsc_unstable())
+			sched_clock_stable = 1;
 	}
 
 	/*
--