Message-Id: <F61F0ED7-D82B-412F-971F-691856539B6C@gmail.com>
Date:   Sat, 10 Jun 2023 19:04:44 +0800
From:   Alan Huang <mmpgouride@...il.com>
To:     paulmck@...nel.org
Cc:     SeongJae Park <sj@...nel.org>,
        Joel Fernandes <joel@...lfernandes.org>, corbet@....net,
        rcu@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release'
 in insert function


> On Jun 10, 2023, at 13:37, Alan Huang <mmpgouride@...il.com> wrote:
> 
> Hi Paul,
> 
>> On Jun 10, 2023, at 07:42, Paul E. McKenney <paulmck@...nel.org> wrote:
>> 
>> On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
>>> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <joel@...lfernandes.org> wrote:
>>> 
>>>> On Thu, May 18, 2023 at 6:40 PM SeongJae Park <sj@...nel.org> wrote:
>>>>> 
>>>>> The document says we can avoid extra smp_rmb() in lockless_lookup() and
>>>>> extra _release() in insert function when hlist_nulls is used.  However,
>>>>> the example code snippet for the insert function is still using the
>>>>> extra _release().  Drop it.
>>>>> 
>>>>> Signed-off-by: SeongJae Park <sj@...nel.org>
>>>>> ---
>>>>> Documentation/RCU/rculist_nulls.rst | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
>>>>> index 5cd6f3f8810f..463270273d89 100644
>>>>> --- a/Documentation/RCU/rculist_nulls.rst
>>>>> +++ b/Documentation/RCU/rculist_nulls.rst
>>>>> @@ -191,7 +191,7 @@ scan the list again without harm.
>>>>>  obj = kmem_cache_alloc(cachep);
>>>>>  lock_chain(); // typically a spin_lock()
>>>>>  obj->key = key;
>>>>> -  atomic_set_release(&obj->refcnt, 1); // key before refcnt
>>>>> +  atomic_set(&obj->refcnt, 1);
>>>>>  /*
>>>>>   * insert obj in RCU way (readers might be traversing chain)
>>>>>   */
>>>> 
>>>> If the write of 1 to ->refcnt is reordered with the setting of ->key,
>>>> what prevents the 'lookup algorithm' from doing a key match
>>>> (obj->key == key) before the refcount has been initialized?
>>>> 
>>>> Are we sure the reordering mentioned in the document is the same as
>>>> the reordering prevented by the atomic_set_release()?
>>> 
>>> Paul, may I ask your opinion?
>> 
>> The next line of code is this:
>> 
>> hlist_nulls_add_head_rcu(&obj->obj_node, list);
>> 
>> If I understand the code correctly, obj (and thus *obj) are not
>> visible to readers before the hlist_nulls_add_head_rcu().  And
>> hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
>> initialization (including both ->key and ->refcnt) is ordered before
>> list insertion.
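
For reference, the publish step boils down to roughly the following,
paraphrased from include/linux/rculist_nulls.h -- a sketch, details may
differ across kernel versions:

	static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
						    struct hlist_nulls_head *h)
	{
		struct hlist_nulls_node *first = h->first;

		n->next = first;
		WRITE_ONCE(n->pprev, &h->first);
		/*
		 * Release ordering: all prior initialization of *n,
		 * including ->key and ->refcnt, is visible before
		 * readers can observe the new list head.
		 */
		rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
		if (!is_a_nulls(first))
			WRITE_ONCE(first->pprev, &n->next);
	}

Note that this ordering only covers readers that reach obj through the
list head, which is the point of the SLAB_TYPESAFE_BY_RCU caveat below.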
>> 
>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU.  This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.
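
For concreteness, such a cache is created with the SLAB_TYPESAFE_BY_RCU
flag, along these lines (a sketch; the cache name and struct are
illustrative):

	cachep = kmem_cache_create("obj_cache", sizeof(struct obj), 0,
				   SLAB_TYPESAFE_BY_RCU, NULL);

With this flag, a freed object's memory can be reused immediately for a
new object of the same type, without waiting for a grace period --
which is why a reader can still hold a reference into an object's
previous life.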
>> 
>> Unfortunately, the implementation of try_get_ref() is not shown.  However,
>> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
>> the subsequent check of obj->key with key in the lookup algorithm to
>> be stable.  For this check to be stable, try_get_ref() needs to use an
>> atomic operation with at least acquire semantics (kref_get_unless_zero()
>> would work), and this must pair with something in the initialization.
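
A plausible shape for try_get_ref(), along the lines described here
(hypothetical, since the real implementation is not shown in the
document):

	static inline bool try_get_ref(struct obj *obj)
	{
		/*
		 * Value-returning conditional RMW: fully ordered on
		 * success, hence at least acquire, which makes the
		 * subsequent obj->key check stable.  Pairs with the
		 * atomic_set_release() in the insertion path.
		 */
		return atomic_inc_not_zero(&obj->refcnt);
	}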
>> 
>> So I don't see how it is safe to weaken that atomic_set_release() to
>> atomic_set(), even on x86.
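
For context, the lookup retry loop in the document (roughly) is where
that stability matters -- if try_get_ref() succeeds on a recycled
object, the obj->key recheck is what catches it:

	rcu_read_lock();
begin:
	obj = lockless_lookup(key);
	if (obj) {
		if (!try_get_ref(obj)) /* might fail for free objects */
			goto begin;
		if (obj->key != key) { /* not the object we expected */
			put_ref(obj);
			goto begin;
		}
	}
	rcu_read_unlock();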
> 
> I totally agree, but only in the case of using hlist_nulls.
> 
> That means atomic_set_release() is not enough in the case without hlist_nulls:
> we must ensure that the store to obj->next (in hlist_add_head_rcu) is ordered before the store

Typo: not before, but after.

> to obj->key. Otherwise, we can get the new 'next' and the old 'key', in which case we can't detect
> an object movement (from one chain to another).
> 
> So I'm afraid the atomic_set_release() in the insertion algorithm without
> hlist_nulls should be changed back to:
> 
> smp_wmb();
> atomic_set(&obj->refcnt, 1);
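
Spelled out, the non-hlist_nulls insertion would then read roughly as
below (a sketch assembled from the document's snippet; note that the
smp_wmb() orders the ->key store before both the ->refcnt store and the
->next store inside hlist_add_head_rcu(), which a release store on
->refcnt alone would not guarantee):

	obj = kmem_cache_alloc(cachep);
	lock_chain(); // typically a spin_lock()
	obj->key = key;
	smp_wmb(); /* orders ->key before ->refcnt and ->next */
	atomic_set(&obj->refcnt, 1);
	/*
	 * insert obj in RCU way (readers might be traversing chain)
	 */
	hlist_add_head_rcu(&obj->obj_node, list);
	unlock_chain(); // typically a spin_unlock()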
> 
> Thanks,
> Alan
> 
>> 
>> Or am I missing something subtle here?
>> 
>> Thanx, Paul
>> 
>>> Thanks,
>>> SJ
>>> 
>>>> 
>>>> For the other 3 patches, feel free to add:
>>>> Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
>>>> 
>>>> thanks,
>>>> 
>>>> - Joel

