Date:   Fri, 12 Jan 2018 10:45:48 -0800
From:   Andi Kleen <andi@...stfloor.org>
To:     tglx@...utronix.de
Cc:     x86@...nel.org, dwmw@...zon.co.uk, linux-kernel@...r.kernel.org,
        pjt@...gle.com, torvalds@...ux-foundation.org,
        gregkh@...ux-foundation.org, peterz@...radead.org,
        luto@...capital.net, thomas.lendacky@....com,
        arjan.van.de.ven@...el.com, Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH 2/4] x86/retpoline: Avoid return buffer underflows on context switch

From: Andi Kleen <ak@...ux.intel.com>

CPUs have return buffers which store return addresses so that the
targets of RETs can be predicted. Some CPUs (Skylake, some Broadwells)
can fall back to indirect branch prediction on return buffer underflow.

With retpoline we want to avoid uncontrolled indirect branches,
which could be poisoned by ring 3, so we need to avoid uncontrolled
return buffer underflows in the kernel.
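
For reference, retpoline replaces each indirect branch with a thunk
that captures any speculation in a safe loop. A minimal sketch of the
64-bit thunk for a target held in %rax; the labels here are
illustrative, not the actual kernel symbols:

	call	.Lset_target
.Lspec_trap:
	pause				/* speculative "return" lands here */
	lfence				/* and spins harmlessly */
	jmp	.Lspec_trap
.Lset_target:
	mov	%rax, (%rsp)		/* overwrite return address with real target */
	ret				/* architecturally jumps to the target */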

This can happen when we context switch from a shallower to a deeper
kernel stack. Unwinding the deeper kernel stack then executes more RETs
than there are matching entries in the return buffer, eventually
underflowing it and falling back to the indirect branch predictor.

The other thread could be running a system call triggered by an
attacker too, so the context switch would cause the attacked thread
to fall back to an uncontrolled indirect branch prediction, which
would then consume the values planted by the attacker.

To guard against this, fill the return buffer with controlled
content during context switch. This prevents any underflows.
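
Conceptually, such a fill sequence is a loop of CALLs to a forward
label: each CALL pushes a benign return address into the return
buffer, a speculation trap sits at the would-be return site, and a
final stack pointer adjustment discards the stacked return addresses.
A minimal 64-bit sketch; the register and the count of 16 are chosen
for illustration only:

	mov	$16, %ecx		/* number of RSB entries to stuff */
1:	call	2f			/* push a known return address into the RSB */
3:	pause				/* speculation trap: a speculative */
	lfence				/* "return" to here spins harmlessly */
	jmp	3b
2:	dec	%ecx
	jnz	1b
	add	$(16*8), %rsp		/* drop the 16 pushed return addresses */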

This is only enabled on Skylake.
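
Presumably the new X86_FEATURE_RETURN_UNDERFLOW flag patches the
sequence in via the alternatives mechanism, so unaffected CPUs pay
nothing. A sketch of how such a macro could be guarded, assuming the
standard ALTERNATIVE infrastructure; the skip label is illustrative:

.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
	ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr	/* jump is NOP'd out when \ftr is set */
	/* ... fill loop as sketched above, using \reg and \nr ... */
.Lskip_rsb_\@:
.endm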

Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
 arch/x86/entry/entry_32.S | 14 ++++++++++++++
 arch/x86/entry/entry_64.S | 14 ++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index a1f28a54f23a..bbecb7c2f6cb 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -250,6 +250,20 @@ ENTRY(__switch_to_asm)
 	popl	%ebx
 	popl	%ebp
 
+	/*
+	 * When we switch from a shallower to a deeper call stack
+	 * the call stack will underflow in the kernel in the next task.
+	 * This could cause the CPU to fall back to indirect branch
+	 * prediction, which may be poisoned.
+	 *
+	 * To guard against that always fill the return stack with
+	 * known values.
+	 *
+	 * We do this in assembler because it needs to be before
+	 * any calls on the new stack, and this can be difficult to
+	 * ensure in a complex C function like __switch_to.
+	 */
+	FILL_RETURN_BUFFER %ecx, RSB_FILL_LOOPS, X86_FEATURE_RETURN_UNDERFLOW
 	jmp	__switch_to
 END(__switch_to_asm)
 
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 59874bc1aed2..3caac129cd07 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -495,6 +495,20 @@ ENTRY(__switch_to_asm)
 	popq	%rbx
 	popq	%rbp
 
+	/*
+	 * When we switch from a shallower to a deeper call stack
+	 * the call stack will underflow in the kernel in the next task.
+	 * This could cause the CPU to fall back to indirect branch
+	 * prediction, which may be poisoned.
+	 *
+	 * To guard against that always fill the return stack with
+	 * known values.
+	 *
+	 * We do this in assembler because it needs to be before
+	 * any calls on the new stack, and this can be difficult to
+	 * ensure in a complex C function like __switch_to.
+	 */
+	FILL_RETURN_BUFFER %r8, RSB_FILL_LOOPS, X86_FEATURE_RETURN_UNDERFLOW
 	jmp	__switch_to
 END(__switch_to_asm)
 
-- 
2.14.3
