Message-ID: <20251114160017.CrDJHi5w@linutronix.de>
Date: Fri, 14 Nov 2025 17:00:17 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>,
"Paul E. McKenney" <paulmck@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
Uladzislau Rezki <urezki@...il.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>
Subject: Re: linux-next: manual merge of the rcu tree with the ftrace tree
On 2025-11-14 10:46:33 [-0500], Steven Rostedt wrote:
> On Fri, 14 Nov 2025 14:35:32 +0100
> Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:
>
> > On 2025-11-14 07:42:55 [-0500], Steven Rostedt wrote:
> > > > diff --cc kernel/trace/trace_syscalls.c
> > > > index e96d0063cbcf,3f699b198c56..000000000000
> > > > --- a/kernel/trace/trace_syscalls.c
> > > > +++ b/kernel/trace/trace_syscalls.c
> > > > @@@ -878,6 -322,8 +890,7 @@@ static void ftrace_syscall_enter(void *
> > > > * buffer and per-cpu data require preemption to be disabled.
> > > > */
> > > > might_fault();
> > > > + preempt_rt_guard();
> > > > - guard(preempt_notrace)();
> > >
> > > My code made it so that preemption is not needed here but is moved later
> > > down for the logic that does the reading of user space data.
> > >
> > > Note, it must have preemption disabled for all configs (including RT).
> > > Otherwise, the data it has can get corrupted.
> > >
> > > Paul, can you change it so that you *do not* touch this file?
> >
> > Where is preempt_rt_guard() from?
>
> Ah, it's from the patch I submitted that has this:
>
> +/*
> + * When PREEMPT_RT is enabled, it disables migration instead
> + * of preemption. The pseudo syscall trace events need to match
> + * so that the counter logic recorded into the ring buffer by
> + * trace_event_buffer_reserve() still matches what it expects.
> + */
> +#ifdef CONFIG_PREEMPT_RT
> +# define preempt_rt_guard() guard(migrate)()
> +#else
> +# define preempt_rt_guard()
> +#endif
> +
>
> I must be getting old, as I forgot I wrote this :-p
>
> I only saw the update from Stephen and thought it was disabling preemption.
But having both is kind of gross. Also, the mapping from
preempt_rt_guard() to guard(migrate)() only on RT is not obvious from
the name.
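For reference, if I read include/linux/preempt.h right, guard(migrate)()
comes from DEFINE_LOCK_GUARD_0(migrate, migrate_disable(),
migrate_enable()), so at the use site the macro above behaves roughly
like this sketch (comments mine):

	#ifdef CONFIG_PREEMPT_RT
		migrate_disable();
		/* ... rest of the function body ... */
		migrate_enable();	/* runs on every return path */
	#else
		/* no-op: neither preemption nor migration is disabled */
	#endif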
> It doesn't disable preemption, but is here to keep the latency
> preempt_count counting the same in both PREEMPT_RT and non-PREEMPT_RT. You
> know, the stuff that shows up in the trace:
>
> "d..4."
urgh.
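(For reference, the columns in that field are, per the header described
in Documentation/trace/ftrace.rst:

	#  _-----=> irqs-off/BH-disabled
	# / _----=> need-resched
	# | / _---=> hardirq/softirq
	# || / _--=> preempt-depth
	# ||| / _-=> migrate-disable
	# |||| /

so "d..4." means IRQs off, a preempt depth of 4 and everything else
clear.)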
We did that so what the tracer records matches reality. Since the tracer
itself disabled preemption, we decremented the preempt_count value being
recorded so that it reflects what was there before the tracepoint
started changing it. That was tracing_gen_ctx_dec(). Now I see we have
something similar in tracing_gen_ctx_dec_cond().
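For reference, unless I'm misreading include/linux/trace_events.h, that
helper is roughly:

	static inline unsigned int tracing_gen_ctx_dec(void)
	{
		unsigned int trace_ctx;

		trace_ctx = tracing_gen_ctx();
		/*
		 * Subtract one from the preemption counter if preemption
		 * is enabled, assuming the event was recorded with the
		 * tracer's own preempt_disable() still in effect.
		 */
		if (IS_ENABLED(CONFIG_PREEMPTION))
			trace_ctx--;
		return trace_ctx;
	}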
But why do we need to disable migration here? Why isn't !RT affected by
this? I remember someone had a trace where the NMI flag was set and
migrate-disable was at its maximum, which sounds like someone
decremented the migrate_disable counter while migration wasn't actually
disabled…
> Paul, never mind, this code will not affect the code I added.
>
> -- Steve
Sebastian