Date:   Sat, 28 Nov 2020 09:42:55 +0530
From:   Neeraj Upadhyay <neeraju@...eaurora.org>
To:     paulmck@...nel.org
Cc:     rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
        rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
        fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
        kent.overstreet@...il.com
Subject: Re: [PATCH v2 tip/core/rcu 1/6] srcu: Make Tiny SRCU use multi-bit
 grace-period counter



On 11/28/2020 7:46 AM, Paul E. McKenney wrote:
> On Wed, Nov 25, 2020 at 10:03:26AM +0530, Neeraj Upadhyay wrote:
>>
>>
>> On 11/24/2020 10:48 AM, Neeraj Upadhyay wrote:
>>>
>>>
>>> On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
>>>> On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
>>>>> On 11/21/2020 6:29 AM, paulmck@...nel.org wrote:
>>>>>> From: "Paul E. McKenney" <paulmck@...nel.org>
>>>>>>
>>>>>> There is a need for a polling interface for SRCU grace periods.  This
>>>>>> polling needs to distinguish between an SRCU instance being idle on the
>>>>>> one hand and being in the middle of a grace period on the other.  This
>>>>>> commit therefore converts the Tiny SRCU srcu_struct structure's srcu_idx
>>>>>> from a de facto boolean to a free-running counter, using the bottom bit
>>>>>> to indicate that a grace period is in progress.  The second-from-bottom
>>>>>> bit is thus used as the index returned by srcu_read_lock().
>>>>>>
>>>>>> Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
>>>>>> Reported-by: Kent Overstreet <kent.overstreet@...il.com>
>>>>>> [ paulmck: Fix __srcu_read_lock() idx computation per Neeraj Upadhyay. ]
>>>>>> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
>>>>>> ---
>>>>>>  include/linux/srcutiny.h | 4 ++--
>>>>>>  kernel/rcu/srcutiny.c    | 5 +++--
>>>>>>  2 files changed, 5 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
>>>>>> index 5a5a194..d9edb67 100644
>>>>>> --- a/include/linux/srcutiny.h
>>>>>> +++ b/include/linux/srcutiny.h
>>>>>> @@ -15,7 +15,7 @@
>>>>>>
>>>>>>  struct srcu_struct {
>>>>>>          short srcu_lock_nesting[2];     /* srcu_read_lock() nesting depth. */
>>>>>> -        short srcu_idx;                 /* Current reader array element. */
>>>>>> +        unsigned short srcu_idx;        /* Current reader array element in bit 0x2. */
>>>>>>          u8 srcu_gp_running;             /* GP workqueue running? */
>>>>>>          u8 srcu_gp_waiting;             /* GP waiting for readers? */
>>>>>>          struct swait_queue_head srcu_wq;
>>>>>> @@ -59,7 +59,7 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
>>>>>>  {
>>>>>>          int idx;
>>>>>>
>>>>>> -        idx = READ_ONCE(ssp->srcu_idx);
>>>>>> +        idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>>>>>>          WRITE_ONCE(ssp->srcu_lock_nesting[idx], ssp->srcu_lock_nesting[idx] + 1);
>>>>>>          return idx;
>>>>>>  }
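
For readers following along, here is a stand-alone sketch (plain userspace
C, not part of the patch) that tabulates the new reader-index computation
for the first few counter values; it assumes nothing beyond the expression
in the hunk above:

    #include <stdio.h>

    /* Sketch only: tabulate the Tiny SRCU reader-slot computation.
     * Bit 0x1 of the counter means "grace period in progress";
     * bit 0x2 selects the reader array element.
     */
    int main(void)
    {
            unsigned int counter;   /* stands in for ->srcu_idx */

            for (counter = 0; counter < 4; counter++) {
                    int reader_idx = ((counter + 1) & 0x2) >> 1;

                    printf("counter=%u gp_running=%u reader_idx=%d\n",
                           counter, counter & 0x1, reader_idx);
            }
            return 0;
    }

While no grace period is running (even counter), readers use the slot named
by bit 0x2; once the counter goes odd, new readers switch to the other slot.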
>>>>>
>>>>> Is a change needed in the idx calculation in srcu_torture_stats_print()?
>>>>>
>>>>> static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
>>>>>     idx = READ_ONCE(ssp->srcu_idx) & 0x1;
>>>>
>>>> Excellent point!  It should match the calculation in __srcu_read_lock(),
>>>> shouldn't it?  I have updated this, thank you!
>>>>
>>>>                              Thanx, Paul
>>>>
>>>
>>> Updated version looks good!
>>>
>>>
>>> Thanks
>>> Neeraj
>>>
>>
>> For the version in the -rcu dev branch:
>>
>> Reviewed-by: Neeraj Upadhyay <neeraju@...eaurora.org>
> 
> I applied all of these, thank you very much!
> 

Welcome :)

>> The only minor point I have is that the idx calculation could be made an
>> inline function (though srcu_drive_gp() does not require a READ_ONCE for
>> ->srcu_idx):
>>
>> __srcu_read_lock() and srcu_torture_stats_print() are using
>>
>> idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
>>
>> whereas srcu_drive_gp() uses:
>>
>> idx = (ssp->srcu_idx & 0x2) / 2;
> 
> They do work on different elements of the various arrays.  Or do you
> believe that the srcu_drive_gp() use needs adjusting?

My bad, I missed that they are using different elements of the array.
Please ignore this comment.
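
To make the distinction concrete, here is a minimal stand-alone sketch
(again not kernel code) of one grace-period cycle under the new scheme,
using the two expressions quoted above:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int counter = 0;       /* stands in for ->srcu_idx; no GP running */

            /* srcu_drive_gp(): pick the slot to drain, then mark the GP running. */
            int gp_idx = (counter & 0x2) / 2;               /* slot 0 */
            counter++;                                      /* bit 0x1 now set */

            /* __srcu_read_lock() during the GP: new readers land in the
             * other slot, so the grace period can wait for slot 0 to drain.
             */
            int reader_idx = ((counter + 1) & 0x2) >> 1;    /* slot 1 */
            assert(gp_idx != reader_idx);

            counter++;      /* GP done: bit 0x1 clears, bit 0x2 flips */
            printf("gp_idx=%d reader_idx=%d counter=%u\n",
                   gp_idx, reader_idx, counter);
            return 0;
    }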


Thanks
Neeraj

> 
> Either way, the overhead of READ_ONCE() is not a problem at all.
> Would you like to put together a patch so that I can see exactly what
> you are suggesting?
> 
> 							Thanx, Paul
> 
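For what it is worth, the suggested helper might look something like the
sketch below (the name __srcu_read_idx and its placement are assumptions
for illustration, not anything in the tree):

    /* Hypothetical helper: centralize the reader-slot computation used
     * by both __srcu_read_lock() and srcu_torture_stats_print().
     */
    static inline int __srcu_read_idx(struct srcu_struct *ssp)
    {
            /* Adding 1 selects the slot that new readers should use,
             * whether or not a grace period is currently in progress.
             */
            return ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
    }
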
>> Thanks
>> Neeraj
>>
>>>>> Thanks
>>>>> Neeraj
>>>>>
>>>>>> diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
>>>>>> index 6208c1d..5598cf6 100644
>>>>>> --- a/kernel/rcu/srcutiny.c
>>>>>> +++ b/kernel/rcu/srcutiny.c
>>>>>> @@ -124,11 +124,12 @@ void srcu_drive_gp(struct work_struct *wp)
>>>>>>          ssp->srcu_cb_head = NULL;
>>>>>>          ssp->srcu_cb_tail = &ssp->srcu_cb_head;
>>>>>>          local_irq_enable();
>>>>>> -        idx = ssp->srcu_idx;
>>>>>> -        WRITE_ONCE(ssp->srcu_idx, !ssp->srcu_idx);
>>>>>> +        idx = (ssp->srcu_idx & 0x2) / 2;
>>>>>> +        WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>>>>          WRITE_ONCE(ssp->srcu_gp_waiting, true);  /* srcu_read_unlock() wakes! */
>>>>>>          swait_event_exclusive(ssp->srcu_wq, !READ_ONCE(ssp->srcu_lock_nesting[idx]));
>>>>>>          WRITE_ONCE(ssp->srcu_gp_waiting, false); /* srcu_read_unlock() cheap. */
>>>>>> +        WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
>>>>>>
>>>>>>          /* Invoke the callbacks we removed above. */
>>>>>>          while (lh) {
>>>>>>
>>>>>
>>>>> -- 
>>>>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
>>>>> member of the Code Aurora Forum, hosted by The Linux Foundation
>>>
>>
>> -- 
>> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
>> the Code Aurora Forum, hosted by The Linux Foundation

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member of the Code Aurora Forum, hosted by The Linux Foundation
