Message-Id: <68a800e0afa0ca6797358cd8b5b12394eac89fdc.1580713729.git.christophe.leroy@c-s.fr>
Date: Mon, 3 Feb 2020 07:11:56 +0000 (UTC)
From: Christophe Leroy <christophe.leroy@....fr>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>, ruscur@...sell.cc
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: [PATCH v3 2/7] powerpc/kprobes: Mark newly allocated probes as RO
With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
W+X page at boot by default. This can be tested with
CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
kernel log during boot.
powerpc doesn't implement its own alloc_insn_page() for kprobes the way
some other architectures do. Even so, the page couldn't have been marked
RO immediately at allocation, since we memcpy() the probed instruction
into it afterwards. Once that copy is done nothing should be allowed to
modify the page, so write permission is removed well before the kprobe
is armed.
With the page now mapped RO at allocation time, the memcpy() would fault
once more than one probe is allocated, so use patch_instruction()
instead, which is safe to use on RO text.
Reviewed-by: Daniel Axtens <dja@...ens.net>
Signed-off-by: Russell Currey <ruscur@...sell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@....fr>
---
v3: copied alloc_insn_page() from arm64, set_memory_ro() is now called there.
v2: removed the redundant flush
---
arch/powerpc/kernel/kprobes.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 2d27ec4feee4..bfab91ded234 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -24,6 +24,8 @@
#include <asm/sstep.h>
#include <asm/sections.h>
#include <linux/uaccess.h>
+#include <linux/set_memory.h>
+#include <linux/vmalloc.h>
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -102,6 +104,16 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
return addr;
}
+void *alloc_insn_page(void)
+{
+ void *page = vmalloc_exec(PAGE_SIZE);
+
+ if (page)
+ set_memory_ro((unsigned long)page, 1);
+
+ return page;
+}
+
int arch_prepare_kprobe(struct kprobe *p)
{
int ret = 0;
@@ -124,11 +136,8 @@ int arch_prepare_kprobe(struct kprobe *p)
}
if (!ret) {
- memcpy(p->ainsn.insn, p->addr,
- MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+ patch_instruction(p->ainsn.insn, *p->addr);
p->opcode = *p->addr;
- flush_icache_range((unsigned long)p->ainsn.insn,
- (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
}
p->ainsn.boostable = 0;
--
2.25.0