Message-Id: <20181211222326.14581-5-bp@alien8.de>
Date: Tue, 11 Dec 2018 23:23:26 +0100
From: Borislav Petkov <bp@...en8.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: X86 ML <x86@...nel.org>, Andy Lutomirski <luto@...capital.net>,
"H. Peter Anvin" <hpa@...or.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Tom Lendacky <thomas.lendacky@....com>,
Andy Lutomirski <luto@...nel.org>,
John Stultz <john.stultz@...aro.org>
Subject: [RFC PATCH 4/4] x86/TSC: Use RDTSCP
From: Borislav Petkov <bp@...e.de>

Currently, the kernel uses

  [LM]FENCE; RDTSC

in the timekeeping code to guarantee monotonicity of time, where the
*FENCE is selected based on the vendor.

Replace that sequence with RDTSCP, which is faster or on par and gives
the same ordering guarantees.

A microbenchmark on Intel shows that the change is on par.

On AMD, the change is either on par with the current LFENCE-prefixed
RDTSC or slightly better with RDTSCP, depending on the machine.

The comparison is done with the LFENCE-prefixed RDTSC (and not with the
MFENCE-prefixed one, as one would normally expect) because all modern
AMD families make LFENCE serializing and thus avoid the heavy MFENCE by
effectively enabling X86_FEATURE_LFENCE_RDTSC.
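
For reference, the kind of comparison above can be roughly approximated
from userspace with a small program like the sketch below. This is only
an illustration and not part of the patch: the helper names, iteration
count and measurement method are made up here, and absolute numbers will
vary by microarchitecture. The relevant point is that RDTSCP waits for
prior instructions to complete and thus needs no explicit fence in front
of it.

	/* Rough userspace sketch, not the in-kernel benchmark. */
	#include <stdint.h>
	#include <stdio.h>

	/* LFENCE; RDTSC - the sequence currently used with X86_FEATURE_LFENCE_RDTSC. */
	static inline uint64_t lfence_rdtsc(void)
	{
		uint32_t lo, hi;

		asm volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
		return ((uint64_t)hi << 32) | lo;
	}

	/* RDTSCP - the proposed replacement; clobbers ECX with TSC_AUX. */
	static inline uint64_t rdtscp(void)
	{
		uint32_t lo, hi;

		asm volatile("rdtscp" : "=a" (lo), "=d" (hi) :: "ecx");
		return ((uint64_t)hi << 32) | lo;
	}

	int main(void)
	{
		enum { LOOPS = 10 * 1000 * 1000 };	/* arbitrary iteration count */
		volatile uint64_t sink = 0;
		uint64_t start, end;
		int i;

		start = lfence_rdtsc();
		for (i = 0; i < LOOPS; i++)
			sink += lfence_rdtsc();
		end = lfence_rdtsc();
		printf("lfence;rdtsc: %.1f cycles/read\n",
		       (double)(end - start) / LOOPS);

		start = rdtscp();
		for (i = 0; i < LOOPS; i++)
			sink += rdtscp();
		end = rdtscp();
		printf("rdtscp:       %.1f cycles/read\n",
		       (double)(end - start) / LOOPS);

		(void)sink;
		return 0;
	}
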
Co-developed-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Borislav Petkov <bp@...e.de>
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: John Stultz <john.stultz@...aro.org>
Cc: x86@...nel.org
Link: https://lkml.kernel.org/r/20181119184556.11479-1-bp@alien8.de
---
 arch/x86/include/asm/msr.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 91e4cf189914..5cc3930cb465 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -217,6 +217,8 @@ static __always_inline unsigned long long rdtsc(void)
  */
 static __always_inline unsigned long long rdtsc_ordered(void)
 {
+	DECLARE_ARGS(val, low, high);
+
 	/*
 	 * The RDTSC instruction is not ordered relative to memory
 	 * access. The Intel SDM and the AMD APM are both vague on this
@@ -227,9 +229,19 @@ static __always_inline unsigned long long rdtsc_ordered(void)
 	 * ordering guarantees as reading from a global memory location
 	 * that some other imaginary CPU is updating continuously with a
 	 * time stamp.
+	 *
+	 * Thus, use the preferred barrier on the respective CPU, aiming for
+	 * RDTSCP as the default.
 	 */
-	barrier_nospec();
-	return rdtsc();
+	asm volatile(ALTERNATIVE_3("rdtsc",
+				   "mfence; rdtsc", X86_FEATURE_MFENCE_RDTSC,
+				   "lfence; rdtsc", X86_FEATURE_LFENCE_RDTSC,
+				   "rdtscp", X86_FEATURE_RDTSCP)
+		     : EAX_EDX_RET(val, low, high)
+		     /* RDTSCP clobbers ECX with MSR_TSC_AUX. */
+		     :: "ecx");
+
+	return EAX_EDX_VAL(val, low, high);
 }
 
 static inline unsigned long long native_read_pmc(int counter)
-- 
2.19.1