Message-ID: <CAEXW_YQFqW2QcAuHZEhc_GaUaB-=QOS0WgUOizd=FYwtFQ8vag@mail.gmail.com>
Date: Fri, 19 May 2023 14:52:50 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: SeongJae Park <sj@...nel.org>
Cc: paulmck@...nel.org, corbet@....net, rcu@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release'
in insert function
On Thu, May 18, 2023 at 6:40 PM SeongJae Park <sj@...nel.org> wrote:
>
> The document says we can avoid extra smp_rmb() in lockless_lookup() and
> extra _release() in insert function when hlist_nulls is used. However,
> the example code snippet for the insert function is still using the
> extra _release(). Drop it.
>
> Signed-off-by: SeongJae Park <sj@...nel.org>
> ---
> Documentation/RCU/rculist_nulls.rst | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> index 5cd6f3f8810f..463270273d89 100644
> --- a/Documentation/RCU/rculist_nulls.rst
> +++ b/Documentation/RCU/rculist_nulls.rst
> @@ -191,7 +191,7 @@ scan the list again without harm.
> obj = kmem_cache_alloc(cachep);
> lock_chain(); // typically a spin_lock()
> obj->key = key;
> - atomic_set_release(&obj->refcnt, 1); // key before refcnt
> + atomic_set(&obj->refcnt, 1);
> /*
> * insert obj in RCU way (readers might be traversing chain)
> */
If the write of 1 to ->refcnt is reordered with the setting of ->key,
what prevents the 'lookup algorithm' from doing a key match (obj->key
== key) before the refcount has been initialized?
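To make the interleaving I have in mind concrete, here is a rough
sketch; the reader side is taken (roughly, with the RCU read-side
locking and the retry labels omitted) from the lockless lookup example
earlier in the same document, where try_get_ref() is an
atomic_inc_not_zero() on ->refcnt:

  /* Writer: insert path as patched, i.e. with a plain atomic_set(). */
  obj = kmem_cache_alloc(cachep);
  lock_chain();                           // typically a spin_lock()
  obj->key = key;                         // (A)
  atomic_set(&obj->refcnt, 1);            // (B) nothing orders (A) vs (B) here
  /* insert obj in RCU way (readers might be traversing chain) */
  hlist_nulls_add_head_rcu(&obj->obj_node, list);
  unlock_chain();                         // typically a spin_unlock()

  /* Reader: may already hold a pointer to obj when obj gets reused. */
  hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
          if (obj->key == key) {          // may observe the new key from (A) ...
                  if (!try_get_ref(obj))  // ... before (B) has initialized ->refcnt?
                          goto begin;
                  if (obj->key != key) {  // re-check once the reference is held
                          put_ref(obj);
                          goto begin;
                  }
                  goto out;
          }
  }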
Are we sure the reordering mentioned in the document is the same as
the reordering prevented by the atomic_set_release()?
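For reference, the only ordering atomic_set_release() was providing in
the pre-patch snippet is between the two stores on the insert side:

  obj->key = key;                       // ordered before ...
  atomic_set_release(&obj->refcnt, 1);  // ... this store ("key before refcnt")

so I want to make sure that this is indeed the reordering the
surrounding text is talking about.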
For the other 3 patches, feel free to add:
Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
thanks,
- Joel