Message-ID: <50c4aa37-388b-449c-8184-00a9d69471fc@paulmck-laptop>
Date: Tue, 20 Jun 2023 15:26:44 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Alan Huang <mmpgouride@...il.com>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] rcu: Add necessary WRITE_ONCE()
On Tue, Jun 20, 2023 at 05:13:46PM +0000, Alan Huang wrote:
> Commit c54a2744497d ("list: Add hlist_unhashed_lockless()") and
> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to
> ->pprev for hlist_nulls") added various WRITE_ONCE() calls to pair
> with the READ_ONCE() in hlist_unhashed_lockless(), but a few places
> were missed.  This commit adds the missing WRITE_ONCE() calls.
>
> Also add WRITE_ONCE() to pair with the READ_ONCE() in hlist_empty().
>
> Signed-off-by: Alan Huang <mmpgouride@...il.com>
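
For reference, the lockless readers that these WRITE_ONCE() calls are
meant to pair with look roughly like this (abridged from
include/linux/list.h as it stood around this time):

/* Lockless "is this node hashed?" check: reads ->pprev without the lock. */
static inline int hlist_unhashed_lockless(const struct hlist_node *h)
{
	return !READ_ONCE(h->pprev);
}

/* Lockless emptiness check: reads ->first without the lock. */
static inline int hlist_empty(const struct hlist_head *h)
{
	return !READ_ONCE(h->first);
}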

On hlist_nulls_add_tail_rcu(), good catch, thank you!

On the others, are there really cases where a lockless read races with
the update? At first glance, that sounds like a usage bug. For example,
as I understand it, when you use something like hlist_del(), you are
supposed to ensure that there are no concurrent readers. Which is the
point of the assignment of the special value LIST_POISON2, right?

Or is there some use case that I am missing?

If I am not missing something, then switching the non-RCU APIs to
WRITE_ONCE() would be a step backwards, because it would make it harder
for tools like KCSAN to find bugs.

							Thanx, Paul
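
To make the KCSAN point concrete, here is a hypothetical sketch (not
from this thread; the demo_* names and the lock are invented purely
for illustration) of the kind of race being discussed:

#include <linux/list.h>
#include <linux/spinlock.h>

static HLIST_HEAD(demo_head);
static struct hlist_node demo_node;	/* assume already hlist_add_head()'d onto demo_head */
static DEFINE_SPINLOCK(demo_lock);

/* Updater: correctly serialized, uses the non-RCU API. */
static void demo_updater(void)
{
	spin_lock(&demo_lock);
	hlist_del(&demo_node);		/* plain store of LIST_POISON2 to ->pprev */
	spin_unlock(&demo_lock);
}

/* Buggy lockless reader: races with the plain ->pprev store above. */
static bool demo_reader(void)
{
	return hlist_unhashed_lockless(&demo_node);	/* READ_ONCE(->pprev) */
}

Because hlist_del() leaves its ->pprev store plain, KCSAN can report
the data race above and point at the questionable lockless reader;
marking that store with WRITE_ONCE() would declare the race intentional
and silence the report, which is the "step backwards" described here.
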
> ---
> Changelog:
> V1 -> V2:
> Add WRITE_ONCE() in hlist_del_init() to pair with the READ_ONCE() in
> hlist_unhashed_lockless().
>
>  include/linux/list.h          | 9 +++++----
>  include/linux/list_nulls.h    | 2 +-
>  include/linux/rculist_nulls.h | 2 +-
>  3 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/list.h b/include/linux/list.h
> index ac366958ea..3a29b95bfe 100644
> --- a/include/linux/list.h
> +++ b/include/linux/list.h
> @@ -912,7 +912,7 @@ static inline void hlist_del(struct hlist_node *n)
>  {
>  	__hlist_del(n);
>  	n->next = LIST_POISON1;
> -	n->pprev = LIST_POISON2;
> +	WRITE_ONCE(n->pprev, LIST_POISON2);
>  }
>
>  /**
> @@ -925,7 +925,8 @@ static inline void hlist_del_init(struct hlist_node *n)
>  {
>  	if (!hlist_unhashed(n)) {
>  		__hlist_del(n);
> -		INIT_HLIST_NODE(n);
> +		n->next = NULL;
> +		WRITE_ONCE(n->pprev, NULL);
>  	}
>  }
>
> @@ -1026,8 +1027,8 @@ static inline void hlist_move_list(struct hlist_head *old,
>  {
>  	new->first = old->first;
>  	if (new->first)
> -		new->first->pprev = &new->first;
> -	old->first = NULL;
> +		WRITE_ONCE(new->first->pprev, &new->first);
> +	WRITE_ONCE(old->first, NULL);
>  }
>
>  #define hlist_entry(ptr, type, member) container_of(ptr,type,member)
> diff --git a/include/linux/list_nulls.h b/include/linux/list_nulls.h
> index fa6e8471bd..b63b0589fa 100644
> --- a/include/linux/list_nulls.h
> +++ b/include/linux/list_nulls.h
> @@ -95,7 +95,7 @@ static inline void hlist_nulls_add_head(struct hlist_nulls_node *n,
>
>  	n->next = first;
>  	WRITE_ONCE(n->pprev, &h->first);
> -	h->first = n;
> +	WRITE_ONCE(h->first, n);
>  	if (!is_a_nulls(first))
>  		WRITE_ONCE(first->pprev, &n->next);
>  }
> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
> index ba4c00dd80..c65121655b 100644
> --- a/include/linux/rculist_nulls.h
> +++ b/include/linux/rculist_nulls.h
> @@ -138,7 +138,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
>
>  	if (last) {
>  		n->next = last->next;
> -		n->pprev = &last->next;
> +		WRITE_ONCE(n->pprev, &last->next);
>  		rcu_assign_pointer(hlist_nulls_next_rcu(last), n);
>  	} else {
>  		hlist_nulls_add_head_rcu(n, h);
> --
> 2.34.1
>
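
As a side note on the hunk acknowledged above: the rculist_nulls change
presumably pairs with the READ_ONCE() in hlist_nulls_unhashed_lockless()
(the nulls counterpart of hlist_unhashed_lockless()).  A minimal,
hypothetical sketch of that pairing; the demo_* names and the bucket
lock are invented, and whether a given subsystem really polls
hashed-ness while re-inserting is an assumption here:

#include <linux/rculist_nulls.h>
#include <linux/spinlock.h>

struct demo_obj {
	struct hlist_nulls_node node;
};

static DEFINE_SPINLOCK(demo_bucket_lock);

/* Updater: (re-)inserts the object at the tail under the bucket lock. */
static void demo_insert(struct demo_obj *obj, struct hlist_nulls_head *head)
{
	spin_lock(&demo_bucket_lock);
	hlist_nulls_add_tail_rcu(&obj->node, head);	/* stores ->pprev; WRITE_ONCE() with this patch */
	spin_unlock(&demo_bucket_lock);
}

/* RCU reader: may check hashed-ness concurrently with the insertion. */
static bool demo_is_hashed(struct demo_obj *obj)
{
	return !hlist_nulls_unhashed_lockless(&obj->node);	/* READ_ONCE(->pprev) */
}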