Message-Id: <20180108235126.28736-1-andi@firstfloor.org>
Date: Mon, 8 Jan 2018 15:51:26 -0800
From: Andi Kleen <andi@...stfloor.org>
To: dwmw2@...radead.org
Cc: pjt@...gle.com, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, gregkh@...ux-foundation.org,
tim.c.chen@...ux.intel.com, dave.hansen@...el.com,
tglx@...utronix.de, peterz@...radead.org, luto@...capital.net,
Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH] x86/retpoline: Also fill return buffer after idle
From: Andi Kleen <ak@...ux.intel.com>
This is an extension of the earlier patch to fill the return buffer
on context switch. It uses the assembler macros added earlier.

When we go into deeper idle states the return buffer could be cleared
in MWAIT, but then another thread which wakes up earlier might
be poisoning the indirect branch predictor. Then, when the return
buffer underflows, there might be an uncontrolled indirect branch.

To guard against this, always fill the return buffer when exiting idle.

Needed on Skylake and some Broadwells.
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
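
For readers who don't have the earlier RSB patch handy: a return buffer
refill generally looks like the sketch below - a run of calls whose return
addresses load the RSB, each followed by a speculation trap, with the stack
pointer fixed up afterwards so nothing actually returns through them. The
macro name, the count of 32 and the exact trap sequence here are illustrative
assumptions, not the FILL_RETURN_BUFFER macro this patch relies on.

	/*
	 * Illustrative sketch only (not the real FILL_RETURN_BUFFER):
	 * each call pushes a return address into the RSB, the pause/lfence
	 * pair traps any speculative return, and the final add discards
	 * the 32 return addresses from the architectural stack (64-bit).
	 */
	.macro STUFF_RSB_SKETCH
	.rept 32
	call	1f
	pause			/* speculation trap */
	lfence
1:
	.endr
	add	$(32*8), %rsp	/* drop the pushed return addresses */
	.endm
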
arch/x86/entry/entry_32.S | 8 ++++++++
arch/x86/entry/entry_64.S | 8 ++++++++
arch/x86/include/asm/mwait.h | 11 ++++++++++-
3 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 7dee84a3cf83..2687cce8a02e 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1092,3 +1092,11 @@ ENTRY(rewind_stack_do_exit)
call do_exit
1: jmp 1b
END(rewind_stack_do_exit)
+
+ENTRY(fill_return_buffer)
+#ifdef CONFIG_RETPOLINE
+ ALTERNATIVE "ret", "", X86_FEATURE_RETPOLINE
+ FILL_RETURN_BUFFER
+#endif
+ ret
+END(fill_return_buffer)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a33033e2bfe0..92fbec1b0eb5 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1831,3 +1831,11 @@ ENTRY(rewind_stack_do_exit)
call do_exit
1: jmp 1b
END(rewind_stack_do_exit)
+
+ENTRY(fill_return_buffer)
+#ifdef CONFIG_RETPOLINE
+ ALTERNATIVE "ret", "", X86_FEATURE_RETPOLINE
+ FILL_RETURN_BUFFER
+#endif
+ ret
+END(fill_return_buffer)
diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 39a2fb29378a..1d9f9269b5e7 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -87,6 +87,8 @@ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
:: "a" (eax), "c" (ecx));
}

+extern __visible void fill_return_buffer(void);
+
/*
* This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
* which can obviate IPI to trigger checking of need_resched.
@@ -107,8 +109,15 @@ static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
}

__monitor((void *)¤t_thread_info()->flags, 0, 0);
- if (!need_resched())
+ if (!need_resched()) {
__mwait(eax, ecx);
+ /*
+ * idle could have cleared the return buffer,
+ * so fill it to prevent uncontrolled
+ * speculation.
+ */
+ fill_return_buffer();
+ }
}
current_clr_polling();
}
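
A note on the stub: ALTERNATIVE "ret", "", X86_FEATURE_RETPOLINE leaves the
leading ret in place on CPUs without retpoline, so fill_return_buffer()
returns immediately and costs next to nothing there; on retpoline-enabled
CPUs the ret is patched to a NOP and execution falls through into the refill
and the final ret. Roughly the two shapes after boot-time alternatives
patching (the label names below are made up for illustration, and the sketch
macro from the note above stands in for the real FILL_RETURN_BUFFER):

	/* CPU without X86_FEATURE_RETPOLINE: refill is skipped entirely */
fill_return_buffer_plain:
	ret				/* the "ret" oldinstr stays in place */

	/* CPU with X86_FEATURE_RETPOLINE: fall through into the refill */
fill_return_buffer_retpoline:
	nop				/* the "ret" patched out by alternatives */
	STUFF_RSB_SKETCH		/* stand-in for FILL_RETURN_BUFFER */
	ret
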
--
2.14.3