Message-ID: <20260108133329.78a73fed@gandalf.local.home>
Date: Thu, 8 Jan 2026 13:33:29 -0500
From: Steven Rostedt <rostedt@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Masami Hiramatsu
 <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Ben Dooks <ben.dooks@...ethink.co.uk>, Julia Lawall
 <Julia.Lawall@...ia.fr>, Wupeng Ma <mawupeng1@...wei.com>
Subject: [GIT PULL] tracing: Fixes for v6.19


Linus,

tracing fixes for v6.19:

- Remove useless assignment of soft_mode variable

  The function __ftrace_event_enable_disable() sets "soft_mode" in one of
  its branches but never reads it afterward. Remove the dead assignment
  (sketched below).
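
  A minimal sketch of the dead store (hypothetical function name, not the
  kernel code; see the trace_events.c hunk below):

	static int example_enable(struct trace_event_file *file, int soft_disable)
	{
		bool soft_mode = false;

		if (soft_disable) {
			if (atomic_inc_return(&file->sm_ref) > 1)
				return 0;
			/* dead store: soft_mode is never read after this */
			soft_mode = true;
			trace_buffered_event_enable();
		}
		return 0;
	}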

- Add a cond_resched() in ring_buffer_resize()

  The resize function that allocates all the pages for the ring buffer was
  causing a soft lockup on PREEMPT_NONE configs when resizing large
  buffers on machines with many CPUs. Add a cond_resched() to the loop
  that frees the buffer pages (sketched below). Hopefully this is the last
  cond_resched() that needs to be added, as PREEMPT_LAZY becomes the norm
  in the future.
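
  The pattern is the usual soft-lockup guard for long kernel loops (a
  sketch with illustrative names, mirroring the ring_buffer.c hunk below):

	/*
	 * Freeing thousands of buffer pages without ever yielding can
	 * trip the soft lockup detector on PREEMPT_NONE, so give the
	 * scheduler a chance to run between iterations.
	 */
	list_for_each_entry_safe(bpage, tmp, &pages, list) {
		list_del_init(&bpage->list);
		free_buffer_page(bpage);
		cond_resched();
	}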

- Make ftrace_graph_ent depth field signed

  The "depth" field of struct ftrace_graph_ent was converted from "int" to
  "unsigned long" for alignment reasons to work with being embedded in other
  structures. The conversion from a signed to unsigned caused integrity
  checks to always pass as they were comparing "depth" to less than zero.
  Make the field signed long.
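
  A hedged illustration of why the check went dead (standalone C, not the
  kernel code itself):

	struct ent_unsigned { unsigned long depth; };
	struct ent_signed   { long depth; };

	/* always returns 0: an unsigned value is never negative, so the
	 * integrity check silently passes even for underflowed depths */
	int bad_check(struct ent_unsigned *e)  { return e->depth < 0; }

	/* can now flag a corrupted or underflowed depth */
	int good_check(struct ent_signed *e)   { return e->depth < 0; }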

- Add recursion protection to stack trace events

  An infinite recursion was triggered when a stack trace event called into
  RCU, which internally called rcu_read_unlock_special(). That triggered
  another event that also recorded a stack trace, which hit the same RCU
  path and called rcu_read_unlock_special() again, and so on.

  Update trace_test_and_set_recursion() to add a set of context checks for
  events to use, and have the stack trace event use them for recursion
  protection (usage sketched below).
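
  A sketch of the guard as used by the stack trace code (matching the
  trace.c hunk below):

	int bit;

	/*
	 * Returns a context bit on success, or negative if this context
	 * is already recording an event: bail out instead of recursing.
	 */
	bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);
	if (bit < 0)
		return;

	/* ... record the stack trace ... */

	trace_clear_recursion(bit);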

- Make the variable ftrace_dump_on_oops static

  The sysctl cleanup that moved the updates into the files that use them
  moved the ftrace_dump_on_oops handling into trace.c. The variable is no
  longer referenced outside that file, so make it static.
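
  For reference, file-scope "static" gives the array internal linkage
  (generic C, mirroring the trace.c hunk below):

	/* visible only inside this translation unit */
	static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";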


Please pull the latest trace-v6.19-rc4 tree, which can be found at:


  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace-v6.19-rc4

Tag SHA1: 95c6df5f47e592a81fe0d78a16141b66158ca058
Head SHA1: 1e2ed4bfd50ace3c4272cfab7e9aa90956fb7ae0


Ben Dooks (1):
      trace: ftrace_dump_on_oops[] is not exported, make it static

Julia Lawall (1):
      tracing: Drop unneeded assignment to soft_mode

Steven Rostedt (2):
      ftrace: Make ftrace_graph_ent depth field signed
      tracing: Add recursion protection in kernel stack trace recording

Wupeng Ma (1):
      ring-buffer: Avoid softlockup in ring_buffer_resize() during memory free

----
 include/linux/ftrace.h          | 2 +-
 include/linux/trace_recursion.h | 9 +++++++++
 kernel/trace/ring_buffer.c      | 2 ++
 kernel/trace/trace.c            | 8 +++++++-
 kernel/trace/trace_events.c     | 7 +++----
 5 files changed, 22 insertions(+), 6 deletions(-)
---------------------------
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 770f0dc993cc..a3a8989e3268 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -1167,7 +1167,7 @@ static inline void ftrace_init(void) { }
  */
 struct ftrace_graph_ent {
 	unsigned long func; /* Current function */
-	unsigned long depth;
+	long depth; /* signed to check for less than zero */
 } __packed;
 
 /*
diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
index ae04054a1be3..e6ca052b2a85 100644
--- a/include/linux/trace_recursion.h
+++ b/include/linux/trace_recursion.h
@@ -34,6 +34,13 @@ enum {
 	TRACE_INTERNAL_SIRQ_BIT,
 	TRACE_INTERNAL_TRANSITION_BIT,
 
+	/* Internal event use recursion bits */
+	TRACE_INTERNAL_EVENT_BIT,
+	TRACE_INTERNAL_EVENT_NMI_BIT,
+	TRACE_INTERNAL_EVENT_IRQ_BIT,
+	TRACE_INTERNAL_EVENT_SIRQ_BIT,
+	TRACE_INTERNAL_EVENT_TRANSITION_BIT,
+
 	TRACE_BRANCH_BIT,
 /*
  * Abuse of the trace_recursion.
@@ -58,6 +65,8 @@ enum {
 
 #define TRACE_LIST_START	TRACE_INTERNAL_BIT
 
+#define TRACE_EVENT_START	TRACE_INTERNAL_EVENT_BIT
+
 #define TRACE_CONTEXT_MASK	((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)
 
 /*
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 41c9f5d079be..630221b00838 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3137,6 +3137,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 					list) {
 			list_del_init(&bpage->list);
 			free_buffer_page(bpage);
+
+			cond_resched();
 		}
 	}
  out_err_unlock:
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6f2148df14d9..baec63134ab6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -138,7 +138,7 @@ cpumask_var_t __read_mostly	tracing_buffer_mask;
  * by commas.
  */
 /* Set to string format zero to disable by default */
-char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";
+static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";
 
 /* When set, tracing will stop when a WARN*() is hit */
 static int __disable_trace_on_warning;
@@ -3012,6 +3012,11 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	int bit;
+
+	bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);
+	if (bit < 0)
+		return;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
@@ -3080,6 +3085,7 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 	/* Again, don't let gcc optimize things here */
 	barrier();
 	__this_cpu_dec(ftrace_stack_reserve);
+	trace_clear_recursion(bit);
 }
 
 static inline void ftrace_trace_stack(struct trace_array *tr,
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 76067529db61..137b4d9bb116 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -826,16 +826,15 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
 		 * When soft_disable is set and enable is set, we want to
 		 * register the tracepoint for the event, but leave the event
 		 * as is. That means, if the event was already enabled, we do
-		 * nothing (but set soft_mode). If the event is disabled, we
-		 * set SOFT_DISABLED before enabling the event tracepoint, so
-		 * it still seems to be disabled.
+		 * nothing. If the event is disabled, we set SOFT_DISABLED
+		 * before enabling the event tracepoint, so it still seems
+		 * to be disabled.
 		 */
 		if (!soft_disable)
 			clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);
 		else {
 			if (atomic_inc_return(&file->sm_ref) > 1)
 				break;
-			soft_mode = true;
 			/* Enable use of trace_buffered_event */
 			trace_buffered_event_enable();
 		}
