Message-ID: <9FAE52D6-D073-43D9-93D3-3E49006943B2@nvidia.com>
Date: Wed, 23 Jul 2025 13:32:50 +0000
From: Joel Fernandes <joelagnelf@...dia.com>
To: "paulmck@...nel.org" <paulmck@...nel.org>
CC: "rcu@...r.kernel.org" <rcu@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kernel-team@...a.com" <kernel-team@...a.com>, "rostedt@...dmis.org"
<rostedt@...dmis.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>
Subject: Re: [PATCH v3 5/4] srcu: Document __srcu_read_{,un}lock_fast()
implicit RCU readers
> On Jul 22, 2025, at 6:17 PM, Paul E. McKenney <paulmck@...nel.org> wrote:
>
> This commit documents the implicit RCU readers that are implied by the
> this_cpu_inc() and atomic_long_inc() operations in __srcu_read_lock_fast()
> and __srcu_read_unlock_fast(). While in the area, fix the documentation
> of the memory pairing of atomic_long_inc() in __srcu_read_lock_fast().
Just to clarify: the implication here is that, since SRCU-fast uses synchronize_rcu() on the update side, these increments also act as classical RCU readers, so they block synchronize_rcu() as well. So simply using SRCU-fast is another way of achieving what the previously used preempt-disabling provided in those use cases.
Or is the rationale for this something else?
I would probably spell this out in a longer comment above the if/else, rather than modify the inline comments.
But I have probably misunderstood the whole thing. :-(
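In other words, the conversion pattern I have in mind is something like the following sketch (hypothetical reader-side code, not taken from this patch; my_srcu and shared_data are made-up names):

```c
/* Before: preempt_disable() made this a classical-RCU reader,
 * so it blocked synchronize_rcu() on the update side. */
preempt_disable();
do_something_with(shared_data);
preempt_enable();

/* After: if the this_cpu_inc()/atomic_long_inc() in
 * __srcu_read_lock_fast() really is an implicit RCU reader,
 * then this alone should also block synchronize_rcu(): */
struct srcu_ctr __percpu *scp;

scp = srcu_read_lock_fast(&my_srcu);
do_something_with(shared_data);
srcu_read_unlock_fast(&my_srcu, scp);
```

Is that the intended implication of the new comments?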
-Joel
>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> Cc: <bpf@...r.kernel.org>
>
> diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> index 043b5a67ef71e..78e1a7b845ba9 100644
> --- a/include/linux/srcutree.h
> +++ b/include/linux/srcutree.h
> @@ -245,9 +245,9 @@ static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct
> struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
>
> if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> - this_cpu_inc(scp->srcu_locks.counter); /* Y */
> + this_cpu_inc(scp->srcu_locks.counter); // Y, and implicit RCU reader.
> else
> - atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); /* Z */
> + atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); // Y, and implicit RCU reader.
> barrier(); /* Avoid leaking the critical section. */
> return scp;
> }
> @@ -271,9 +271,9 @@ static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_
> {
> barrier(); /* Avoid leaking the critical section. */
> if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> - this_cpu_inc(scp->srcu_unlocks.counter); /* Z */
> + this_cpu_inc(scp->srcu_unlocks.counter); // Z, and implicit RCU reader.
> else
> - atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks)); /* Z */
> + atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks)); // Z, and implicit RCU reader.
> }
>
> void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor);
>