Message-ID: <fa4fd174-065d-4a04-b080-ffe04d31313f@nvidia.com>
Date: Wed, 23 Jul 2025 12:10:46 -0400
From: Joel Fernandes <joelagnelf@...dia.com>
To: paulmck@...nel.org
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org, kernel-team@...a.com,
 rostedt@...dmis.org, bpf@...r.kernel.org
Subject: Re: [PATCH v3 5/4] srcu: Document __srcu_read_{,un}lock_fast()
 implicit RCU readers



On 7/23/2025 9:32 AM, joelagnelf@...dia.com wrote:
> 
> 
>> On Jul 22, 2025, at 6:17 PM, Paul E. McKenney <paulmck@...nel.org> wrote:
>>
>> This commit documents the implicit RCU readers that are implied by the
>> this_cpu_inc() and atomic_long_inc() operations in __srcu_read_lock_fast()
>> and __srcu_read_unlock_fast().  While in the area, fix the documentation
>> of the memory pairing of atomic_long_inc() in __srcu_read_lock_fast().
> 
> Just to clarify, the implication here is that since SRCU-fast uses
> synchronize_rcu() on the update side, these operations end up acting as
> classical RCU readers too. So simply using SRCU-fast is another way of
> achieving what the previously used preempt-disabling provided in these
> use cases.

Hi Paul, it was nice syncing with you off-list. Here are my suggestions and
where I am coming from:

1. For someone who doesn't know that SRCU-fast depends on synchronize_rcu()
(me after a few beers :P), the word 'RCU' in the comments added by this patch
might raise the question 'which RCU are we referring to - SRCU, classical RCU,
or some other?'. So I would say 'classical RCU reader' in the comment.

2. It would be good to call out specifically that an SRCU-fast critical
section is akin to a classical RCU reader, because its implementation depends
on synchronize_rcu() to make up for the lack of read-side memory barriers
(see the sketch after this list).

3. Given the potential size of these comment suggestions, it may make sense
to provide them as one bigger block comment rather than inline as you did,
and also to call out the tracing use case there for additional clarification
(a rough example of such a comment follows below).

As we discussed, I could provide a patch doing all of this soon (unless you
are OK with making these changes yourself).

Thanks!

 - Joel




> 
> Or is the rationale for this something else?
> 
> I would probably spell this out more in a longer comment above the if/else, than modify the inline comments.
> 
> But I have probably misunderstood the whole thing. :-(
> 
> -Joel
> 
>>
>> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
>> Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
>> Cc: Steven Rostedt <rostedt@...dmis.org>
>> Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
>> Cc: <bpf@...r.kernel.org>
>>
>> diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
>> index 043b5a67ef71e..78e1a7b845ba9 100644
>> --- a/include/linux/srcutree.h
>> +++ b/include/linux/srcutree.h
>> @@ -245,9 +245,9 @@ static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct
>>    struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
>>
>>    if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
>> -        this_cpu_inc(scp->srcu_locks.counter); /* Y */
>> +        this_cpu_inc(scp->srcu_locks.counter); // Y, and implicit RCU reader.
>>    else
>> -        atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  /* Z */
>> +        atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  // Y, and implicit RCU reader.
>>    barrier(); /* Avoid leaking the critical section. */
>>    return scp;
>> }
>> @@ -271,9 +271,9 @@ static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_
>> {
>>    barrier();  /* Avoid leaking the critical section. */
>>    if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
>> -        this_cpu_inc(scp->srcu_unlocks.counter);  /* Z */
>> +        this_cpu_inc(scp->srcu_unlocks.counter);  // Z, and implicit RCU reader.
>>    else
>> -        atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks));  /* Z */
>> +        atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks));  // Z, and implicit RCU reader.
>> }
>>
>> void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor);
>>

