Message-ID: <166325187487.401.4017547159660113681.tip-bot2@tip-bot2>
Date: Thu, 15 Sep 2022 14:24:34 -0000
From: "tip-bot2 for Peter Zijlstra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: x86/core] x86,retpoline: Be sure to emit INT3 after JMP *%\reg

The following commit has been merged into the x86/core branch of tip:

Commit-ID: 8c03af3e090e9d57d90f482d344563dd4bae1e66
Gitweb: https://git.kernel.org/tip/8c03af3e090e9d57d90f482d344563dd4bae1e66
Author: Peter Zijlstra <peterz@...radead.org>
AuthorDate: Thu, 08 Sep 2022 12:04:50 +02:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 15 Sep 2022 16:13:53 +02:00

x86,retpoline: Be sure to emit INT3 after JMP *%\reg

Both AMD and Intel recommend using INT3 after an indirect JMP. Make sure
to emit one when rewriting the retpoline JMP irrespective of compiler
SLS options or even CONFIG_SLS.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Alexei Starovoitov <alexei.starovoitov@...il.com>
Link: https://lkml.kernel.org/r/Yxm+QkFPOhrVSH6q@hirez.programming.kicks-ass.net
---
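[ Illustrative note, not part of the commit: for a JMP-flavoured
  retpoline site rewritten with retpolines disabled, the original
  5-byte "jmp __x86_indirect_thunk_rax" now becomes the indirect JMP,
  an INT3, and NOP padding. A minimal sketch of the expected bytes,
  assuming %rax and no LFENCE prefix: ]

/* Illustrative sketch only -- not kernel code. */
static const unsigned char rewritten_jmp_site[] = {
	0xff, 0xe0,	/* jmp *%rax                              */
	0xcc,		/* int3: stop straight-line speculation    */
	0x90, 0x90,	/* NOP padding up to the original length   */
};
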
 arch/x86/kernel/alternative.c | 9 +++++++++
 arch/x86/net/bpf_jit_comp.c   | 4 +++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 62f6b8b..68d84cf 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -453,6 +453,15 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
 		return ret;
 	i += ret;
 
+	/*
+	 * The compiler is supposed to EMIT an INT3 after every unconditional
+	 * JMP instruction due to AMD BTC. However, if the compiler is too old
+	 * or SLS isn't enabled, we still need an INT3 after indirect JMPs
+	 * even on Intel.
+	 */
+	if (op == JMP32_INSN_OPCODE && i < insn->length)
+		bytes[i++] = INT3_INSN_OPCODE;
+
 	for (; i < insn->length;)
 		bytes[i++] = BYTES_NOP1;
 
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index c1f6c1c..4922517 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -419,7 +419,9 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
 		OPTIMIZER_HIDE_VAR(reg);
 		emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
 	} else {
-		EMIT2(0xFF, 0xE0 + reg);
+		EMIT2(0xFF, 0xE0 + reg);	/* jmp *%\reg */
+		if (IS_ENABLED(CONFIG_RETPOLINE) || IS_ENABLED(CONFIG_SLS))
+			EMIT1(0xCC);		/* int3 */
 	}
 
 	*pprog = prog;
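
[ Likewise illustrative: with CONFIG_RETPOLINE or CONFIG_SLS enabled,
  the fallback branch of emit_indirect_jump() above now emits three
  bytes for, e.g., %rax: ]

/* Sketch of the resulting JIT output for reg == RAX (illustrative). */
static const unsigned char bpf_indirect_jmp_rax[] = {
	0xff, 0xe0,	/* jmp *%rax */
	0xcc,		/* int3      */
};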