Message-ID: <20250928084641.7f90db4f@batman.local.home>
Date: Sun, 28 Sep 2025 08:46:41 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Masami Hiramatsu
<mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Mark Rutland <mark.rutland@....com>, Wang Liang <wangliang74@...wei.com>
Subject: [GIT PULL] tracing: Fixes for v6.17
Linus,
tracing fixes for v6.17
- Fix buffer overflow in osnoise_cpu_write()
The buffer allocated to read a string from user space was not nul
terminated after copying from user. The string is then parsed, and if
user space did not supply a nul byte, the parse reads past the end of
the buffer. Allocate one extra byte and add a nul terminating byte
after copying the string.
- Fix missing check for lockdown on tracing
There's a path from kprobe events or uprobe events that can update the
tracing system even when lockdown on tracing is active. Add the missing
check to the dynamic event path.
- Add a recursion check for the function graph return path
Now that fprobes can hook to the function graph tracer and call
different code between the entry and the exit, the exit code may now
call functions that were not called on entry. This means the exit
handler can trigger recursion that is not caught and crash the system.
Add the same recursion checks to the function exit handler as exist in
the entry handler path.
Please pull the latest trace-v6.17-rc7 tree, which can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace-v6.17-rc7
Tag SHA1: 4bafc20386cc1bdcbd421fdf2e171b3943547b5b
Head SHA1: 0db0934e7f9bb624ed98a665890dbe249f65b8fd
Masami Hiramatsu (Google) (2):
tracing: dynevent: Add a missing lockdown check on dynevent
tracing: fgraph: Protect return handler from recursion loop
Wang Liang (1):
tracing/osnoise: Fix slab-out-of-bounds in _parse_integer_limit()
----
kernel/trace/fgraph.c | 12 ++++++++++++
kernel/trace/trace_dynevent.c | 4 ++++
kernel/trace/trace_osnoise.c | 3 ++-
3 files changed, 18 insertions(+), 1 deletion(-)
---------------------------
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 1e3b32b1e82c..484ad7a18463 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -815,6 +815,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
unsigned long bitmap;
unsigned long ret;
int offset;
+ int bit;
int i;
ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &offset);
@@ -829,6 +830,15 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
if (fregs)
ftrace_regs_set_instruction_pointer(fregs, ret);
+ bit = ftrace_test_recursion_trylock(trace.func, ret);
+ /*
+ * This can fail because ftrace_test_recursion_trylock() allows one nest
+ * call. If we are already in a nested call, then we don't probe this and
+ * just return the original return address.
+ */
+ if (unlikely(bit < 0))
+ goto out;
+
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
trace.retval = ftrace_regs_get_return_value(fregs);
#endif
@@ -852,6 +862,8 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
}
}
+ ftrace_test_recursion_unlock(bit);
+out:
/*
* The ftrace_graph_return() may still access the current
* ret_stack structure, we need to make sure the update of
diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
index 5d64a18cacac..d06854bd32b3 100644
--- a/kernel/trace/trace_dynevent.c
+++ b/kernel/trace/trace_dynevent.c
@@ -230,6 +230,10 @@ static int dyn_event_open(struct inode *inode, struct file *file)
{
int ret;
+ ret = security_locked_down(LOCKDOWN_TRACEFS);
+ if (ret)
+ return ret;
+
ret = tracing_check_open_get_tr(NULL);
if (ret)
return ret;
diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
index 337bc0eb5d71..dc734867f0fc 100644
--- a/kernel/trace/trace_osnoise.c
+++ b/kernel/trace/trace_osnoise.c
@@ -2325,12 +2325,13 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
if (count < 1)
return 0;
- buf = kmalloc(count, GFP_KERNEL);
+ buf = kmalloc(count + 1, GFP_KERNEL);
if (!buf)
return -ENOMEM;
if (copy_from_user(buf, ubuf, count))
return -EFAULT;
+ buf[count] = '\0';
if (!zalloc_cpumask_var(&osnoise_cpumask_new, GFP_KERNEL))
return -ENOMEM;