Date:   Tue, 29 Nov 2016 10:17:32 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Peter Zijlstra <peterz@...radead.org>
cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>, x86@...nel.org,
        Borislav Petkov <bp@...en8.de>, Yinghai Lu <yinghai@...nel.org>
Subject: Re: [patch 4/8] x86/tsc: Verify TSC_ADJUST from idle

On Mon, 21 Nov 2016, Peter Zijlstra wrote:
> On Mon, Nov 21, 2016 at 09:16:44AM +0100, Thomas Gleixner wrote:
> > On Sun, 20 Nov 2016, Peter Zijlstra wrote:
> > > On Sat, Nov 19, 2016 at 01:47:37PM -0000, Thomas Gleixner wrote:
> > > > When entering idle, it's a good opportunity to verify that the TSC_ADJUST
> > > > MSR has not been tampered with (BIOS hiding SMM cycles). If tampering is
> > > > detected, emit a warning and restore it to the previous value.
> > > 
> > > > +++ b/arch/x86/kernel/process.c
> > > > @@ -277,6 +277,7 @@ void exit_idle(void)
> > > >  
> > > >  void arch_cpu_idle_enter(void)
> > > >  {
> > > > +	tsc_verify_tsc_adjust();
> > > >  	local_touch_nmi();
> > > >  	enter_idle();
> > > >  }
> > > 
> > > Doing a RDMSR on the idle path isn't going to be popular. That path is
> > > already way too slow.
> > 
> > Of course we can ratelimit that MSR read with jiffies, but do you have any
> > better suggestion aside of doing it timer based?
> 
> Not really :/ 

Revamped patch below.

Thanks,

	tglx

8<-----------------------

Subject: x86/tsc: Verify TSC_ADJUST from idle
From: Thomas Gleixner <tglx@...utronix.de>
Date: Sat, 19 Nov 2016 13:47:37 -0000

When entering idle, it's a good opportunity to verify that the TSC_ADJUST
MSR has not been tampered with (BIOS hiding SMM cycles). If tampering is
detected, emit a warning and restore it to the previous value.

This is especially important for machines which mark the TSC reliable
because no watchdog clocksource is available (SoCs).

This is not sufficient for HPC (NOHZ_FULL) situations where a CPU never
goes idle, but adding a timer to do the check periodically is not an option
either. On a machine which has this issue, the check triggers right during
boot, so there is a decent chance that the sysadmin will notice.

Rate limit the check to once per second and warn only once per CPU.

Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Yinghai Lu <yinghai@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>
Link: http://lkml.kernel.org/r/20161119134017.732180441@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>

---
 arch/x86/include/asm/tsc.h |    2 ++
 arch/x86/kernel/process.c  |    1 +
 arch/x86/kernel/tsc_sync.c |   37 ++++++++++++++++++++++++++++++++++---
 3 files changed, 37 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -50,8 +50,10 @@ extern void check_tsc_sync_target(void);
 
 #ifdef CONFIG_X86_TSC
 extern void tsc_store_and_check_tsc_adjust(void);
+extern void tsc_verify_tsc_adjust(void);
 #else
 static inline void tsc_store_and_check_tsc_adjust(void) { }
+static inline void tsc_verify_tsc_adjust(void) { }
 #endif
 
 extern int notsc_setup(char *);
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -277,6 +277,7 @@ void exit_idle(void)
 
 void arch_cpu_idle_enter(void)
 {
+	tsc_verify_tsc_adjust();
 	local_touch_nmi();
 	enter_idle();
 }
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -22,12 +22,42 @@
 #include <asm/tsc.h>
 
 struct tsc_adjust {
-	s64	bootval;
-	s64	adjusted;
+	s64		bootval;
+	s64		adjusted;
+	unsigned long	lastcheck;
+	bool		warned;
 };
 
 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
 
+void tsc_verify_tsc_adjust(void)
+{
+	struct tsc_adjust *adj = this_cpu_ptr(&tsc_adjust);
+	s64 curval;
+
+	if (!boot_cpu_has(X86_FEATURE_TSC_ADJUST))
+		return;
+
+	/* Rate limit the MSR check */
+	if (time_before(jiffies, adj->lastcheck + HZ))
+		return;
+
+	adj->lastcheck = jiffies;
+
+	rdmsrl(MSR_IA32_TSC_ADJUST, curval);
+	if (adj->adjusted == curval)
+		return;
+
+	/* Restore the original value */
+	wrmsrl(MSR_IA32_TSC_ADJUST, adj->adjusted);
+
+	if (!adj->warned) {
+		pr_warn(FW_BUG "TSC ADJUST differs: CPU%u %lld --> %lld. Restoring\n",
+			smp_processor_id(), adj->adjusted, curval);
+		adj->warned = true;
+	}
+}
+
 #ifndef CONFIG_SMP
 void __init tsc_store_and_check_tsc_adjust(void)
 {
@@ -40,7 +70,8 @@ void __init tsc_store_and_check_tsc_adju
 	rdmsrl(MSR_IA32_TSC_ADJUST, bootval);
 	cur->bootval = bootval;
 	cur->adjusted = bootval;
-	pr_info("TSC ADJUST: Boot CPU%u: %lld\n",cpu,  bootval);
+	cur->lastcheck = jiffies;
+	pr_info("TSC ADJUST: Boot CPU%u: %lld\n", cpu, bootval);
 }
 
 #else /* !CONFIG_SMP */
