Message-ID: <tip-2995590964da93e1fd9a91550f9c9d9fab28f160@git.kernel.org>
Date:   Tue, 18 Jul 2017 03:41:21 -0700
From:   tip-bot for Andy Lutomirski <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     jpoimboe@...hat.com, jslaby@...e.cz, torvalds@...ux-foundation.org,
        bp@...en8.de, linux-kernel@...r.kernel.org, tglx@...utronix.de,
        mingo@...nel.org, hpa@...or.com, luto@...nel.org,
        peterz@...radead.org, brgerst@...il.com, efault@....de,
        dvlasenk@...hat.com
Subject: [tip:x86/asm] x86/entry/64: Initialize the top of the IRQ stack
 before switching stacks

Commit-ID:  2995590964da93e1fd9a91550f9c9d9fab28f160
Gitweb:     http://git.kernel.org/tip/2995590964da93e1fd9a91550f9c9d9fab28f160
Author:     Andy Lutomirski <luto@...nel.org>
AuthorDate: Tue, 11 Jul 2017 10:33:39 -0500
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 18 Jul 2017 10:56:23 +0200

x86/entry/64: Initialize the top of the IRQ stack before switching stacks

The OOPS unwinder wants the word at the top of the IRQ stack to
point back to the previous stack at all times when the IRQ stack
is in use.  There's currently a one-instruction window in ENTER_IRQ_STACK
during which this isn't the case.  Fix it by writing the old RSP to the
top of the IRQ stack before switching to it.
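
Concretely, the window sits between the conditional stack switch and
the push of the back-link in the old sequence (annotated excerpt of
the code being removed below; the comments are added here for
illustration and are not in the source):

	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp	# RSP now on the IRQ
							# stack, top word stale
		# <- an unwind attempted here cannot find the old stack
	pushq	\old_rsp				# back-link valid only
							# from here on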

This currently writes the old RSP to the same stack slot twice, once
with the MOVQ before the stack switch and again with the final PUSHQ,
which is a bit ugly.  We could get rid of the double write by
replacing irq_stack_ptr with irq_stack_ptr_minus_eight (better name
welcome).  OTOH, there may be all kinds of odd microarchitectural
considerations in play that affect performance by a few cycles here.
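
For reference, the irq_stack_ptr_minus_eight variant mentioned above
might look roughly like this (hypothetical and untested sketch; no
such per-CPU variable exists in the tree):

	/*
	 * Hypothetical: irq_stack_ptr_minus_eight would point at the
	 * back-link slot itself, irq_stack_union + IRQ_STACK_SIZE - 8.
	 */
	movq	\old_rsp, PER_CPU_VAR(irq_stack_union + IRQ_STACK_SIZE - 8)
	movq	PER_CPU_VAR(irq_stack_ptr_minus_eight), %rsp
	/*
	 * RSP now sits at the back-link slot, the same state the
	 * current code only reaches after its pushq, so the second
	 * write of \old_rsp disappears.  The nested path would still
	 * push \old_rsp onto the live IRQ stack as before.
	 */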

Reported-by: Mike Galbraith <efault@....de>
Reported-by: Josh Poimboeuf <jpoimboe@...hat.com>
Signed-off-by: Andy Lutomirski <luto@...nel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Jiri Slaby <jslaby@...e.cz>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: live-patching@...r.kernel.org
Link: http://lkml.kernel.org/r/aae7e79e49914808440ad5310ace138ced2179ca.1499786555.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/entry/entry_64.S | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 0d4483a..b56f7f2 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -469,6 +469,7 @@ END(irq_entries_start)
 	DEBUG_ENTRY_ASSERT_IRQS_OFF
 	movq	%rsp, \old_rsp
 	incl	PER_CPU_VAR(irq_count)
+	jnz	.Lirq_stack_push_old_rsp_\@
 
 	/*
 	 * Right now, if we just incremented irq_count to zero, we've
@@ -478,9 +479,30 @@ END(irq_entries_start)
 	 * it must be *extremely* careful to limit its stack usage.  This
 	 * could include kprobes and a hypothetical future IST-less #DB
 	 * handler.
+	 *
+	 * The OOPS unwinder relies on the word at the top of the IRQ
+	 * stack linking back to the previous RSP for the entire time we're
+	 * on the IRQ stack.  For this to work reliably, we need to write
+	 * it before we actually move ourselves to the IRQ stack.
+	 */
+
+	movq	\old_rsp, PER_CPU_VAR(irq_stack_union + IRQ_STACK_SIZE - 8)
+	movq	PER_CPU_VAR(irq_stack_ptr), %rsp
+
+#ifdef CONFIG_DEBUG_ENTRY
+	/*
+	 * If the first movq above becomes wrong due to IRQ stack layout
+	 * changes, the only way we'll notice is if we try to unwind right
+	 * here.  Assert that we set up the stack right to catch this type
+	 * of bug quickly.
 	 */
+	cmpq	-8(%rsp), \old_rsp
+	je	.Lirq_stack_okay\@
+	ud2
+	.Lirq_stack_okay\@:
+#endif
 
-	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp
+.Lirq_stack_push_old_rsp_\@:
 	pushq	\old_rsp
 .endm
 

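Putting the pieces together, the patched non-nested entry path now
executes the following sequence (an annotated restatement of the
macro above, with the CONFIG_DEBUG_ENTRY assertion omitted; the
comments are added here and are not part of the patch):

	movq	%rsp, \old_rsp			# remember the old stack
	incl	PER_CPU_VAR(irq_count)		# -1 -> 0 claims the IRQ
						# stack (and sets ZF)
	jnz	.Lirq_stack_push_old_rsp_\@	# nested entry: already on
						# the IRQ stack, skip switch
	movq	\old_rsp, PER_CPU_VAR(irq_stack_union + IRQ_STACK_SIZE - 8)
						# back-link written *before*...
	movq	PER_CPU_VAR(irq_stack_ptr), %rsp	# ...RSP moves over
.Lirq_stack_push_old_rsp_\@:
	pushq	\old_rsp			# non-nested: rewrites the
						# back-link slot with the same
						# value (the double write noted
						# above); nested: just saves
						# the old RSP
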