Message-Id: <6C469091-6C20-4BBD-B503-F024021C8AE7@gmail.com>
Date: Sat, 10 Jun 2023 13:52:25 +0800
From: Alan Huang <mmpgouride@...il.com>
To: SeongJae Park <sj@...nel.org>
Cc: "Paul E. McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>, corbet@....net,
rcu@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release'
in insert function
Hi SJ,
> On Jun 10, 2023, at 08:20, SeongJae Park <sj@...nel.org> wrote:
>
> On Fri, 9 Jun 2023 16:42:59 -0700 "Paul E. McKenney" <paulmck@...nel.org> wrote:
>
>> On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
>>> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <joel@...lfernandes.org> wrote:
>>>
>>>> On Thu, May 18, 2023 at 6:40 PM SeongJae Park <sj@...nel.org> wrote:
>>>>>
>>>>> The document says we can avoid extra smp_rmb() in lockless_lookup() and
>>>>> extra _release() in insert function when hlist_nulls is used. However,
>>>>> the example code snippet for the insert function is still using the
>>>>> extra _release(). Drop it.
>>>>>
>>>>> Signed-off-by: SeongJae Park <sj@...nel.org>
>>>>> ---
>>>>> Documentation/RCU/rculist_nulls.rst | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
>>>>> index 5cd6f3f8810f..463270273d89 100644
>>>>> --- a/Documentation/RCU/rculist_nulls.rst
>>>>> +++ b/Documentation/RCU/rculist_nulls.rst
>>>>> @@ -191,7 +191,7 @@ scan the list again without harm.
>>>>> obj = kmem_cache_alloc(cachep);
>>>>> lock_chain(); // typically a spin_lock()
>>>>> obj->key = key;
>>>>> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
>>>>> + atomic_set(&obj->refcnt, 1);
>>>>> /*
>>>>> * insert obj in RCU way (readers might be traversing chain)
>>>>> */
>>>>
>>>> If the write of 1 to ->refcnt is reordered with the setting of ->key,
>>>> what prevents the 'lookup algorithm' from doing a key match (obj->key ==
>>>> key) before the refcount has been initialized?
>>>>
>>>> Are we sure the reordering mentioned in the document is the same as
>>>> the reordering prevented by the atomic_set_release()?
>>>
>>> Paul, may I ask your opinion?
>>
>> The next line of code is this:
>>
>> hlist_nulls_add_head_rcu(&obj->obj_node, list);
>>
>> If I understand the code correctly, obj (and thus *obj) are not
>> visible to readers before the hlist_nulls_add_head_rcu(). And
>> hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
>> initialization (including both ->key and ->refcnt) is ordered before
>> list insertion.
>>
>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.
>>
>> Unfortunately, the implementation of try_get_ref() is not shown. However,
>> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
>> the subsequent check of obj->key with key in the lookup algorithm to
>> be stable. For this check to be stable, try_get_ref() needs to use an
>> atomic operation with at least acquire semantics (kref_get_unless_zero()
>> would work), and this must pair with something in the initialization.
>>
>> So I don't see how it is safe to weaken that atomic_set_release() to
>> atomic_set(), even on x86.
>
> Thank you for the nice explanation, and I agree.
>
>>
>> Or am I missing something subtle here?
>
> I found that the text says the extra _release() in the insert function is
> not needed[1], and I took that to mean the atomic_set_release(). Am I
> misreading it? If not, would it be better to fix the text, for example,
> like below?
The original text is:
“With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
and extra smp_wmb() in insert function.”
We can avoid the extra smp_wmb(), but the _release is still required
because, as Paul said:
>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU. This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.
Without the _release, a reader can observe the old ‘key’ even after a
successful invocation of try_get_ref() (that is, even though try_get_ref()
has already observed the effect of the atomic_set()).
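
To make the pairing concrete, here is a minimal sketch. The document never
shows try_get_ref(), so its body below is an assumption modeled on the
kref_get_unless_zero() approach Paul mentioned; ‘struct obj’ and insert_obj()
are likewise only illustrative, and lock_chain()/unlock_chain() are the
document’s own placeholders for the per-chain lock:

```c
#include <linux/atomic.h>
#include <linux/rculist_nulls.h>

struct obj {				/* illustrative layout */
	struct hlist_nulls_node obj_node;
	atomic_t refcnt;
	int key;
};

/*
 * Reader side: one possible try_get_ref(), modeled on
 * kref_get_unless_zero().  atomic_add_unless() is a conditional RMW
 * that is fully ordered on success, so a successful increment pairs
 * with the writer's atomic_set_release() below: the lookup
 * algorithm's subsequent obj->key == key recheck is then guaranteed
 * to see the new ->key, even if the object was recycled from a
 * SLAB_TYPESAFE_BY_RCU cache.
 */
static bool try_get_ref(struct obj *obj)
{
	return atomic_add_unless(&obj->refcnt, 1, 0);
}

/* Writer side, as in the document's insert algorithm: */
static void insert_obj(struct obj *obj, int key,
		       struct hlist_nulls_head *list)
{
	lock_chain();		/* typically a spin_lock() */
	obj->key = key;
	/*
	 * Store-release: orders the ->key store before the ->refcnt
	 * store, so any reader whose try_get_ref() succeeds also
	 * observes the new key.  With a plain atomic_set() these two
	 * stores may be reordered, and a reader holding a reference
	 * to a recycled object could pass the recheck with a stale key.
	 */
	atomic_set_release(&obj->refcnt, 1);
	hlist_nulls_add_head_rcu(&obj->obj_node, list);
	unlock_chain();		/* typically a spin_unlock() */
}
```

So with hlist_nulls the smp_wmb()/smp_rmb() pair really is unnecessary, but
the release store (paired with an acquire, or stronger, in try_get_ref())
is not.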
Thanks,
Alan
>
> ```
> --- a/Documentation/RCU/rculist_nulls.rst
> +++ b/Documentation/RCU/rculist_nulls.rst
> @@ -129,8 +129,7 @@ very very fast (before the end of RCU grace period)
> Avoiding extra smp_rmb()
> ========================
>
> -With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
> -and extra _release() in insert function.
> +With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup().
>
> For example, if we choose to store the slot number as the 'nulls'
> end-of-list marker for each slot of the hash table, we can detect
> @@ -182,6 +181,9 @@ scan the list again without harm.
> 2) Insert algorithm
> -------------------
>
> +Same as the lookup algorithm above, but uses hlist_nulls_add_head_rcu()
> +instead of hlist_add_head_rcu().
> +
> ::
>
> /*
> @@ -191,7 +193,7 @@ scan the list again without harm.
> obj = kmem_cache_alloc(cachep);
> lock_chain(); // typically a spin_lock()
> obj->key = key;
> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> + atomic_set(&obj->refcnt, 1);
> /*
> * insert obj in RCU way (readers might be traversing chain)
> */
> ```
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/RCU/rculist_nulls.rst#n133
>
>
> Thanks,
> SJ
>
>>
>> Thanx, Paul
>>
>>> Thanks,
>>> SJ
>>>
>>>>
>>>> For the other 3 patches, feel free to add:
>>>> Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
>>>>
>>>> thanks,
>>>>
>>>> - Joel