Message-ID: <20260112103612.41dd4f03@gandalf.local.home>
Date: Mon, 12 Jan 2026 10:36:12 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Alexei Starovoitov
<alexei.starovoitov@...il.com>, LKML <linux-kernel@...r.kernel.org>, Linux
trace kernel <linux-trace-kernel@...r.kernel.org>, bpf
<bpf@...r.kernel.org>, Masami Hiramatsu <mhiramat@...nel.org>, "Paul E.
McKenney" <paulmck@...nel.org>, Sebastian Andrzej Siewior
<bigeasy@...utronix.de>, Thomas Gleixner <tglx@...utronix.de>, Linus
Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v5] tracing: Guard __DECLARE_TRACE() use of
__DO_TRACE_CALL() with SRCU-fast
On Mon, 12 Jan 2026 16:31:28 +0100
Peter Zijlstra <peterz@...radead.org> wrote:
> > OUCH! So migrate disable/enable has a much larger overhead when executed in
> > a module than in the kernel? This means all spin_locks() in modules
> > converted to mutexes in PREEMPT_RT are taking this hit!
>
> Not so, the migrate_disable() for PREEMPT_RT is still in core code --
> kernel/locking/spinlock_rt.c is very much not built as a module.
True. But still, wouldn't it be cleaner to have that variable separate from
the run queue and make the code a bit simpler?
As it stands, it doesn't look like this will even affect tracing, since it
appears that only BPF would need it. So this would just be a cleanup.
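Something along these lines is roughly what I mean -- completely untested,
and assuming the run-queue field in question is nr_pinned (and leaving out
the affinity restore that migrate_enable() also has to do):

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * Rough sketch: keep the pinned count in its own per-CPU variable
 * instead of in struct rq, so migrate_disable()/migrate_enable()
 * never need to touch the run queue at all.
 */
static DEFINE_PER_CPU(unsigned int, nr_pinned);

void migrate_disable(void)
{
	struct task_struct *p = current;

	if (p->migration_disabled) {
		p->migration_disabled++;
		return;
	}

	preempt_disable();
	this_cpu_inc(nr_pinned);
	p->migration_disabled = 1;
	preempt_enable();
}

void migrate_enable(void)
{
	struct task_struct *p = current;

	if (p->migration_disabled > 1) {
		p->migration_disabled--;
		return;
	}

	preempt_disable();
	/* Affinity restore (__set_cpus_allowed_ptr() etc.) elided here */
	p->migration_disabled = 0;
	this_cpu_dec(nr_pinned);
	preempt_enable();
}

The scheduler side that checks for pinned tasks would then read this per-CPU
counter instead of the rq field, but otherwise nothing should have to change.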
-- Steve