Message-ID: <20190426001143.4983-13-namit@vmware.com>
Date: Thu, 25 Apr 2019 17:11:32 -0700
From: Nadav Amit <namit@...are.com>
To: Peter Zijlstra <peterz@...radead.org>,
Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...nel.org>,
Ingo Molnar <mingo@...hat.com>
CC: <linux-kernel@...r.kernel.org>, <x86@...nel.org>, <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Nadav Amit <nadav.amit@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
<linux_dti@...oud.com>, <linux-integrity@...r.kernel.org>,
<linux-security-module@...r.kernel.org>,
<akpm@...ux-foundation.org>, <kernel-hardening@...ts.openwall.com>,
<linux-mm@...ck.org>, <will.deacon@....com>,
<ard.biesheuvel@...aro.org>, <kristen@...ux.intel.com>,
<deneen.t.dock@...el.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Nadav Amit <namit@...are.com>,
Kees Cook <keescook@...omium.org>,
Dave Hansen <dave.hansen@...el.com>,
Masami Hiramatsu <mhiramat@...nel.org>
Subject: [PATCH v5 12/23] x86/jump-label: Remove support for custom poker
There are only two types of poking: early and breakpoint-based. The use
of a function pointer to perform the poking complicates the code and is
probably inefficient due to the indirect branch it requires.
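To make the cost argument concrete, here is an illustrative user-space
sketch (not kernel code; the poke_early()/poke_breakpoint() helpers and
the patch_*() wrappers are made-up names). Before the patch every poke
was dispatched through a function pointer, i.e. an indirect call that
becomes a retpoline thunk when Spectre mitigations are enabled; after
it, the choice is a single predictable conditional with two direct
calls:

	#include <string.h>

	/* Stand-ins for the two real pokers; names are hypothetical. */
	static void *poke_early(void *addr, const void *opcode, size_t len)
	{
		return memcpy(addr, opcode, len);  /* boot: text still writable */
	}

	static void *poke_breakpoint(void *addr, const void *opcode, size_t len)
	{
		return memcpy(addr, opcode, len);  /* placeholder for INT3 patching */
	}

	/* Before: every poke goes through an indirect call, which turns
	 * into a retpoline thunk when Spectre mitigations are enabled.
	 */
	static void *patch_indirect(void *addr, const void *opcode, size_t len,
				    void *(*poker)(void *, const void *, size_t))
	{
		return poker(addr, opcode, len);
	}

	/* After: one well-predicted conditional, two direct calls. */
	static void *patch_direct(void *addr, const void *opcode, size_t len,
				  int booting)
	{
		return booting ? poke_early(addr, opcode, len)
			       : poke_breakpoint(addr, opcode, len);
	}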
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Kees Cook <keescook@...omium.org>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
arch/x86/kernel/jump_label.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e7d8c636b228..e631c358f7f4 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -37,7 +37,6 @@ static void bug_at(unsigned char *ip, int line)
static void __ref __jump_label_transform(struct jump_entry *entry,
enum jump_label_type type,
- void *(*poker)(void *, const void *, size_t),
int init)
{
union jump_code_union jmp;
@@ -50,14 +49,6 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
jmp.offset = jump_entry_target(entry) -
(jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
- /*
- * As long as only a single processor is running and the code is still
- * not marked as RO, text_poke_early() can be used; Checking that
- * system_state is SYSTEM_BOOTING guarantees it.
- */
- if (system_state == SYSTEM_BOOTING)
- poker = text_poke_early;
-
if (type == JUMP_LABEL_JMP) {
if (init) {
expect = default_nop; line = __LINE__;
@@ -80,16 +71,19 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
bug_at((void *)jump_entry_code(entry), line);
/*
- * Make text_poke_bp() a default fallback poker.
+	 * As long as only a single processor is running and the code is still
+	 * not marked as RO, text_poke_early() can be used; checking that
+	 * system_state is SYSTEM_BOOTING guarantees it. It will be set to
+	 * SYSTEM_SCHEDULING before other cores are awakened and before the
+	 * code is write-protected.
*
* At the time the change is being done, just ignore whether we
* are doing nop -> jump or jump -> nop transition, and assume
* always nop being the 'currently valid' instruction
- *
*/
- if (poker) {
- (*poker)((void *)jump_entry_code(entry), code,
- JUMP_LABEL_NOP_SIZE);
+ if (init || system_state == SYSTEM_BOOTING) {
+ text_poke_early((void *)jump_entry_code(entry), code,
+ JUMP_LABEL_NOP_SIZE);
return;
}
@@ -101,7 +95,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
enum jump_label_type type)
{
mutex_lock(&text_mutex);
- __jump_label_transform(entry, type, NULL, 0);
+ __jump_label_transform(entry, type, 0);
mutex_unlock(&text_mutex);
}
@@ -131,5 +125,5 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
jlstate = JL_STATE_NO_UPDATE;
}
if (jlstate == JL_STATE_UPDATE)
- __jump_label_transform(entry, type, text_poke_early, 1);
+ __jump_label_transform(entry, type, 1);
}
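For readers without the tree at hand: the first hunk above ends at the
early return, and __jump_label_transform() then falls through to the
breakpoint-based poker. A paraphrased sketch of the resulting dispatch
(based on the v5.1-era text_poke_bp(), which still took a fourth
'handler' argument):

	/* Single CPU, text still writable: plain early poking. */
	if (init || system_state == SYSTEM_BOOTING) {
		text_poke_early((void *)jump_entry_code(entry), code,
				JUMP_LABEL_NOP_SIZE);
		return;
	}

	/* Live kernel: patch through the INT3 breakpoint machinery. */
	text_poke_bp((void *)jump_entry_code(entry), code, JUMP_LABEL_NOP_SIZE,
		     (void *)jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);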
--
2.17.1