Message-ID: <20170524182547.5c085dc7@vmware.local.home>
Date: Wed, 24 May 2017 18:25:47 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Kees Cook <keescook@...omium.org>,
LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Masami Hiramatsu <mhiramat@...nel.org>,
"Luis R. Rodriguez" <mcgrof@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] x86/ftrace: Make sure that ftrace trampolines are not
RWX
On Wed, 24 May 2017 21:13:27 +0200 (CEST)
Thomas Gleixner <tglx@...utronix.de> wrote:
> > Oops: 0003 [#1] SMP
> > Modules linked in:
> > CPU: 3 PID: 1 Comm: swapper/0 Not tainted 4.12.0-rc2-test+ #42
> > Hardware name: MSI MS-7823/CSM-H87M-G43 (MS-7823), BIOS V1.6 02/22/2014
> > task: ffff8802153a8000 task.stack: ffffc90000c74000
> > RIP: 0010:new_slab+0x1e8/0x2b4
> > RSP: 0000:ffffc90000c77b28 EFLAGS: 00010282
> > RAX: 0000000040040000 RBX: ffff880216003f00 RCX: ffff880214f5c058
> > RDX: 0000000000000000 RSI: ffff880214f5c000 RDI: ffff880216003f00
> > RBP: ffffc90000c77b70 R08: 000000000000002a R09: 0000000000000000
> > R10: 00000000000201e2 R11: 0000000000020190 R12: ffff880214f5c000
> > R13: 000000000000002e R14: 0000000000000001 R15: ffffea000853d700
> > FS:  0000000000000000(0000) GS:ffff88021eb80000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: ffff880214f5c000 CR3: 000000000221d000 CR4: 00000000001406e0
> > Call Trace:
> > ? interleave_nodes+0x29/0x40
> > ___slab_alloc+0x2e8/0x49e
>
> That does not make any sense, but I'm digging into it.
The trampolines use the module allocator, and it appears the memory needs
to be made RW again before it is freed.
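To spell out the pattern (a minimal sketch, not ftrace code: the helper
name free_exec_buffer() is made up for illustration, and the header
locations may differ by kernel version): a buffer that came from
module_alloc() and was later write-protected has to be flipped back to RW
before it is handed back, otherwise the free path writes into a read-only
page, which is the write-protection fault (Oops: 0003) shown above.

#include <linux/mm.h>		/* PAGE_ALIGN(), PAGE_SHIFT */
#include <linux/moduleloader.h>	/* module_alloc(), module_memfree() */
#include <asm/set_memory.h>	/* set_memory_rw() */

/* Illustration only: free a module_alloc() buffer that was made read-only. */
static void free_exec_buffer(void *buf, int size)
{
	int npages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/* Restore write permission on every page before freeing. */
	set_memory_rw((unsigned long)buf, npages);
	module_memfree(buf);
}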
I applied this patch, and it appears to fix the bug for me.
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
-- Steve
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 663a35d..5e93a9a 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -689,8 +689,12 @@ static inline void *alloc_tramp(unsigned long size)
 {
 	return module_alloc(size);
 }
-static inline void tramp_free(void *tramp)
+static inline void tramp_free(void *tramp, int size)
 {
+	int npages;
+
+	npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	set_memory_rw((unsigned long)tramp, npages);
 	module_memfree(tramp);
 }
 #else
@@ -699,7 +703,7 @@ static inline void *alloc_tramp(unsigned long size)
 {
 	return NULL;
 }
-static inline void tramp_free(void *tramp) { }
+static inline void tramp_free(void *tramp, int size) { }
 #endif
 
 /* Defined as markers to the end of the ftrace default trampolines */
@@ -771,7 +775,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	/* Copy ftrace_caller onto the trampoline memory */
 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
 	if (WARN_ON(ret < 0)) {
-		tramp_free(trampoline);
+		tramp_free(trampoline, *tramp_size);
 		return 0;
 	}
 
@@ -797,7 +801,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 
 	/* Are we pointing to the reference? */
 	if (WARN_ON(memcmp(op_ptr.op, op_ref, 3) != 0)) {
-		tramp_free(trampoline);
+		tramp_free(trampoline, *tramp_size);
 		return 0;
 	}
 
@@ -943,7 +947,7 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
 	if (!ops || !(ops->flags & FTRACE_OPS_FL_ALLOC_TRAMP))
 		return;
 
-	tramp_free((void *)ops->trampoline);
+	tramp_free((void *)ops->trampoline, ops->trampoline_size);
 	ops->trampoline = 0;
 }
 