Date:   Sat, 25 Nov 2017 04:38:08 -0800
From:   tip-bot for Nadav Amit <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     namit@...are.com, mingo@...nel.org, rkrcmar@...hat.com,
        luto@...nel.org, pbonzini@...hat.com, tony.luck@...el.com,
        tglx@...utronix.de, bp@...en8.de, hpa@...or.com,
        linux-kernel@...r.kernel.org
Subject: [tip:x86/urgent] x86/tlb: Disable interrupts when changing CR4

Commit-ID:  9d0b62328d34c7044114d4f4281981d4c537c4ba
Gitweb:     https://git.kernel.org/tip/9d0b62328d34c7044114d4f4281981d4c537c4ba
Author:     Nadav Amit <namit@...are.com>
AuthorDate: Fri, 24 Nov 2017 19:29:07 -0800
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Sat, 25 Nov 2017 13:28:43 +0100

x86/tlb: Disable interrupts when changing CR4

CR4 modifications are implemented as RMW operations which update a shadow
variable and write the result to CR4. The RMW operation is protected by
disabling preemption, but there is no enforcement or debugging mechanism.
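
For illustration, the pattern looks roughly like this (a condensed sketch of
the cr4_set_bits() path, not the exact source; the real inlines are in the
tlbflush.h diff below):

	cr4 = this_cpu_read(cpu_tlbstate.cr4);	/* read the shadow copy  */
	cr4 |= mask;				/* modify                */
	this_cpu_write(cpu_tlbstate.cr4, cr4);	/* write the shadow back */
	__write_cr4(cr4);			/* write the real CR4    */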

CR4 modifications also happen in interrupt context via
__native_flush_tlb_global(). This does not affect an interrupted thread's
CR4 RMW operation, because the toggle restores the original CR4 content and
does not modify the shadow variable.
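
A rough sketch of that toggle (illustrative, not the exact kernel source):

	cr4 = this_cpu_read(cpu_tlbstate.cr4);
	__write_cr4(cr4 & ~X86_CR4_PGE);	/* clear PGE: flushes global TLB entries   */
	__write_cr4(cr4);			/* restore CR4; the shadow is never written */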

So the current situation seems to be safe, but a recent patch tried to add
an actual RMW operation in interrupt context, which would cause subtle
corruption.
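
To make the hazard concrete, a hypothetical interleaving (X86_CR4_FOO is a
made-up bit standing in for whatever such a patch would set):

	task: cr4 = this_cpu_read(cpu_tlbstate.cr4);	/* reads the shadow         */
	irq:  cr4_set_bits(X86_CR4_FOO);		/* RMW: updates shadow + CR4 */
	task: __cr4_set(cr4 | mask);			/* writes back a stale value; */
							/* the irq's bit is lost      */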

To prevent that and make the CR4 handling future proof:

 - Add a lockdep assertion to __cr4_set() which catches invocations with
   interrupts enabled

 - Disable interrupts in the cr4 manipulator inlines

 - Rename cr4_toggle_bits() to cr4_toggle_bits_irqsoff(). This is called
   from __switch_to_xtra() where interrupts are already disabled and
   performance matters.

All other call sites are not performance critical, so the extra overhead of
an additional local_irq_save/restore() pair is not a problem. If new call
sites care about performance then the necessary _irqsoff() variants can be
added.
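
Under the new scheme, call sites look like this (a hedged usage sketch;
X86_CR4_PCE is just an example bit, the helper names come from the diff
below):

	/* Normal context: interrupt protection is handled inside the helper */
	cr4_set_bits(X86_CR4_PCE);

	/* Hot path with interrupts already disabled, e.g. __switch_to_xtra() */
	cr4_toggle_bits_irqsoff(X86_CR4_TSD);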

[ tglx: Condensed the patch by moving the irq protection inside the
  manipulator functions. Updated changelog ]

Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Tony Luck <tony.luck@...el.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: nadav.amit@...il.com
Cc: linux-edac@...r.kernel.org
Link: https://lkml.kernel.org/r/20171125032907.2241-3-namit@vmware.com
---
 arch/x86/include/asm/tlbflush.h | 11 ++++++++---
 arch/x86/kernel/process.c       |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e736f7f..877b5c1 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -175,6 +175,7 @@ static inline void cr4_init_shadow(void)
 
 static inline void __cr4_set(unsigned long cr4)
 {
+	lockdep_assert_irqs_disabled();
 	this_cpu_write(cpu_tlbstate.cr4, cr4);
 	__write_cr4(cr4);
 }
@@ -182,24 +183,28 @@ static inline void __cr4_set(unsigned long cr4)
 /* Set in this cpu's CR4. */
 static inline void cr4_set_bits(unsigned long mask)
 {
-	unsigned long cr4;
+	unsigned long cr4, flags;
 
+	local_irq_save(flags);
 	cr4 = this_cpu_read(cpu_tlbstate.cr4);
 	if ((cr4 | mask) != cr4)
 		__cr4_set(cr4 | mask);
+	local_irq_restore(flags);
 }
 
 /* Clear in this cpu's CR4. */
 static inline void cr4_clear_bits(unsigned long mask)
 {
-	unsigned long cr4;
+	unsigned long cr4, flags;
 
+	local_irq_save(flags);
 	cr4 = this_cpu_read(cpu_tlbstate.cr4);
 	if ((cr4 & ~mask) != cr4)
 		__cr4_set(cr4 & ~mask);
+	local_irq_restore(flags);
 }
 
-static inline void cr4_toggle_bits(unsigned long mask)
+static inline void cr4_toggle_bits_irqsoff(unsigned long mask)
 {
 	unsigned long cr4;
 
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 97fb3e5..bb988a2 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -299,7 +299,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	}
 
 	if ((tifp ^ tifn) & _TIF_NOTSC)
-		cr4_toggle_bits(X86_CR4_TSD);
+		cr4_toggle_bits_irqsoff(X86_CR4_TSD);
 
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
