Message-ID: <9d93f63e-cbb3-405e-aa8c-d6ecf54d22b1@paulmck-laptop>
Date: Fri, 3 Oct 2025 01:07:15 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: rcu@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Kernel Team <kernel-team@...a.com>,
Steven Rostedt <rostedt@...dmis.org>,
Andrii Nakryiko <andrii@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH v2 08/21] rcu: Add noinstr-fast rcu_read_{,un}lock_tasks_trace() APIs
On Thu, Oct 02, 2025 at 08:56:01AM -0700, Alexei Starovoitov wrote:
> On Thu, Oct 2, 2025 at 6:38 AM Paul E. McKenney <paulmck@...nel.org> wrote:
> >
> > On Wed, Oct 01, 2025 at 06:37:33PM -0700, Alexei Starovoitov wrote:
> > > On Wed, Oct 1, 2025 at 7:48 AM Paul E. McKenney <paulmck@...nel.org> wrote:
> > > >
> > > > +static inline struct srcu_ctr __percpu *rcu_read_lock_tasks_trace(void)
> > > > +{
> > > > +	struct srcu_ctr __percpu *ret = __srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> > > > +
> > > > +	rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> > > > +	if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB))
> > > > +		smp_mb(); // Provide ordering on noinstr-incomplete architectures.
> > > > +	return ret;
> > > > +}
> > >
> > > ...
> > >
> > > > @@ -50,14 +97,15 @@ static inline void rcu_read_lock_trace(void)
> > > >  {
> > > >  	struct task_struct *t = current;
> > > > 
> > > > +	rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> > > >  	if (t->trc_reader_nesting++) {
> > > >  		// In case we interrupted a Tasks Trace RCU reader.
> > > > -		rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map);
> > > >  		return;
> > > >  	}
> > > >  	barrier(); // nesting before scp to protect against interrupt handler.
> > > > -	t->trc_reader_scp = srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> > > > -	smp_mb(); // Placeholder for more selective ordering
> > > > +	t->trc_reader_scp = __srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct);
> > > > +	if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB))
> > > > +		smp_mb(); // Placeholder for more selective ordering
> > > >  }
> > >
> > > Since srcu_fast() __percpu pointers must be incremented/decremented
> > > within the same task, should we expose "raw" rcu_read_lock_tasks_trace()
> > > at all?
> > > rcu_read_lock_trace() stashes that pointer within a task,
> > > so the implementation guarantees that unlock will happen within the same task,
> > > while _tasks_trace() requires the user not to do stupid things.
> > >
> > > I guess it's fine to have both versions and the amount of copy-paste
> > > seems justified, but I keep wondering.
> > > Especially since _tasks_trace() needs more work on the bpf trampoline
> > > side to pass this pointer around from lock to unlock.
> > > We can add extra 8 bytes to struct bpf_tramp_run_ctx and save it there,
> > > but set/reset run_ctx operates on current anyway, so it's not clear
> > > which version will be faster. I suspect _trace() will be good enough.
> > > Especially since trc_reader_nesting is kinda an optimization.
> >
> > The idea is to convert callers and get rid of rcu_read_lock_trace()
> > in favor of rcu_read_lock_tasks_trace(), the reason being the slow
> > task_struct access on x86. But if the extra storage is an issue for
> > some use cases, we can keep both. In that case, I would of course reduce
> > the copy-pasta in a future patch.
>
> slow task_struct access on x86? That's news to me.
> Why is it slow?
> static __always_inline struct task_struct *get_current(void)
> {
> 	if (IS_ENABLED(CONFIG_USE_X86_SEG_SUPPORT))
> 		return this_cpu_read_const(const_current_task);
> 
> 	return this_cpu_read_stable(current_task);
> }
>
>
> The former is used with gcc 14+ while the latter is with clang.
> I don't understand the difference between the two.
> I'm guessing the gcc 14+ variant can be optimized better within the function,
> but both look plenty fast.
>
> We need current access anyway for run_ctx.
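
For concreteness, the two usage patterns under discussion look roughly
like this (a sketch against the APIs in this patch, not a final shape;
stashing the pointer in a run_ctx field is the hypothetical part):

	struct srcu_ctr __percpu *scp;

	/* Pointer-carrying variant: the caller must convey scp from
	 * lock to unlock, and both must run in the same task, for
	 * example via a hypothetical extra field in struct
	 * bpf_tramp_run_ctx. */
	scp = rcu_read_lock_tasks_trace();
	/* ... read-side critical section ... */
	rcu_read_unlock_tasks_trace(scp);

	/* Task-stashing variant: the pointer lives in
	 * current->trc_reader_scp, so the caller carries nothing,
	 * at the cost of task_struct accesses. */
	rcu_read_lock_trace();
	/* ... read-side critical section ... */
	rcu_read_unlock_trace();
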
Last I measured it, task_struct access was quite a bit slower than
access to per-CPU variables, and the generated assembly made that
unsurprising.
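
As a minimal sketch of the difference (my_pcpu_var is a stand-in name,
not anything in this series):

	#include <linux/percpu.h>
	#include <linux/sched.h>

	DEFINE_PER_CPU(int, my_pcpu_var);

	static inline int read_per_cpu(void)
	{
		/* On x86_64, a single %gs-relative mov. */
		return __this_cpu_read(my_pcpu_var);
	}

	static inline int read_via_current(void)
	{
		/* Load of current (itself a per-CPU read), then a
		 * dependent load at a task_struct offset. */
		return current->pid;
	}
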
But maybe things have changed, and it certainly would be a good thing
if task_struct access had improved. Once I get done hammering it with
functional tests, I will of course do benchmarking and adjust as needed.
							Thanx, Paul