Message-ID: <20190426001143.4983-11-namit@vmware.com>
Date: Thu, 25 Apr 2019 17:11:30 -0700
From: Nadav Amit <namit@...are.com>
To: Peter Zijlstra <peterz@...radead.org>,
Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...nel.org>,
Ingo Molnar <mingo@...hat.com>
CC: <linux-kernel@...r.kernel.org>, <x86@...nel.org>, <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Nadav Amit <nadav.amit@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
<linux_dti@...oud.com>, <linux-integrity@...r.kernel.org>,
<linux-security-module@...r.kernel.org>,
<akpm@...ux-foundation.org>, <kernel-hardening@...ts.openwall.com>,
<linux-mm@...ck.org>, <will.deacon@....com>,
<ard.biesheuvel@...aro.org>, <kristen@...ux.intel.com>,
<deneen.t.dock@...el.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Nadav Amit <namit@...are.com>
Subject: [PATCH v5 10/23] x86/kprobes: Set instruction page as executable

Set the page as executable after allocation. This is a preparatory
change for a following patch that makes module-allocated pages
non-executable.

While at it, do some small cleanup of what appears to be unnecessary
masking.

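For context, the ordering the patch establishes can be sketched as below.
This is an illustration only, not part of the patch; the example_*()
function names are made up, and it assumes the usual module_alloc() and
x86 set_memory_*() helpers:

#include <linux/moduleloader.h>	/* module_alloc(), module_memfree() */
#include <asm/set_memory.h>	/* set_memory_ro/rw/x/nx() */

static void *example_alloc_insn_page(void)
{
	void *page = module_alloc(PAGE_SIZE);

	if (!page)
		return NULL;

	/* Drop write permission first ... */
	set_memory_ro((unsigned long)page, 1);
	/* ... and only then grant execute, so the page is never W+X. */
	set_memory_x((unsigned long)page, 1);

	return page;
}

static void example_free_insn_page(void *page)
{
	/* Revoke execute first ... */
	set_memory_nx((unsigned long)page, 1);
	/* ... then restore write, again avoiding a W+X window. */
	set_memory_rw((unsigned long)page, 1);
	module_memfree(page);
}
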
Acked-by: Masami Hiramatsu <mhiramat@...nel.org>
Signed-off-by: Nadav Amit <namit@...are.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index a034cb808e7e..1591852d3ac4 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;
 
 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (!page)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and only then make it executable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);
 
 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and only then make it writable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
 
--
2.17.1