Message-ID: <CAAVpQUCoCizxTm6wRs0+n6_kPK+kgxwszsYKNds3YvuBfBvrhg@mail.gmail.com>
Date: Tue, 16 Sep 2025 21:27:32 -0700
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: luoxuanqiang <xuanqiang.luo@...ux.dev>
Cc: edumazet@...gle.com, kerneljasonxing@...il.com, davem@...emloft.net,
kuba@...nel.org, netdev@...r.kernel.org,
Xuanqiang Luo <luoxuanqiang@...inos.cn>
Subject: Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu()
and hlist_nulls_replace_init_rcu()
On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@...ux.dev> wrote:
>
>
> On 2025/9/17 02:58, Kuniyuki Iwashima wrote:
> > On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@...ux.dev> wrote:
> >> From: Xuanqiang Luo <luoxuanqiang@...inos.cn>
> >>
> >> Add two functions to atomically replace RCU-protected hlist_nulls entries.
> >>
> >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@...inos.cn>
> >> ---
> >> include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++
> >> 1 file changed, 61 insertions(+)
> >>
> >> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
> >> index 89186c499dd4..8ed604f65a3e 100644
> >> --- a/include/linux/rculist_nulls.h
> >> +++ b/include/linux/rculist_nulls.h
> >> @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
> >> n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
> >> }
> >>
> >> +/**
> >> + * __hlist_nulls_replace_rcu - replace an old entry by a new one
> >> + * @old: the element to be replaced
> >> + * @new: the new element to insert
> >> + *
> >> + * Description:
> >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> >> + * permitting racing traversals.
> >> + *
> >> + * The caller must take whatever precautions are necessary (such as holding
> >> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> >> + * list. However, it is perfectly legal to run concurrently with the _rcu
> >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> >> + */
> >> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> >> + struct hlist_nulls_node *new)
> >> +{
> >> + struct hlist_nulls_node *next = old->next;
> >> +
> >> + new->next = next;
>
> Do we need to use WRITE_ONCE() here, as mentioned in efd04f8a8b45
> ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")?
> I am inclined to think it is necessary.
Good point, then WRITE_ONCE() makes sense.
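Something like below, I guess (untested, just to spell out the idea):

        struct hlist_nulls_node *next = old->next;

        /* new is not published yet, but mark the ->next store anyway,
         * following efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments
         * to ->next for rculist_nulls").
         */
        WRITE_ONCE(new->next, next);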
>
> >> + WRITE_ONCE(new->pprev, old->pprev);
> > As you don't use WRITE_ONCE() for ->next, the new node must
> > not be published yet, so WRITE_ONCE() is unnecessary for ->pprev
> > too.
>
> I noticed that point. My understanding is that using WRITE_ONCE()
> for new->pprev follows the approach in hlist_replace_rcu() to
> match the READ_ONCE() in hlist_nulls_unhashed_lockless() and
> hlist_unhashed_lockless().
Using WRITE_ONCE() or READ_ONCE() implies lockless readers
or writers elsewhere.
sk_hashed() does not use the lockless version, and I think it's
always called under lock_sock() or bh_. Perhaps run the kernel
with KCSAN and see if it complains.
[ It seems hlist_nulls_unhashed_lockless() is not used at all and
hlist_unhashed_lockless() is only used by bpf and timer code. ]
That said, it might be fair to use WRITE_ONCE() here to make
future users less error-prone.
>
> >
> >> + rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new);
> >> + if (!is_a_nulls(next))
> >> + WRITE_ONCE(new->next->pprev, &new->next);
> >> +}
> >> +
> >> +/**
> >> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
> >> + * initialize the old
> >> + * @old: the element to be replaced
> >> + * @new: the new element to insert
> >> + *
> >> + * Description:
> >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> >> + * permitting racing traversals, and reinitialize the old entry.
> >> + *
> >> + * Return: true if the old entry was hashed and was replaced successfully, false
> >> + * otherwise.
> >> + *
> >> + * Note: hlist_nulls_unhashed() on the old node returns true after this.
> >> + * It is useful for RCU based read lockfree traversal if the writer side must
> >> + * know if the list entry is still hashed or already unhashed.
> >> + *
> >> + * The caller must take whatever precautions are necessary (such as holding
> >> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> >> + * list. However, it is perfectly legal to run concurrently with the _rcu
> >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> >> + */
> >> +static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
> >> + struct hlist_nulls_node *new)
> >> +{
> >> + if (!hlist_nulls_unhashed(old)) {
> > As mentioned in v1, this check is redundant.
>
> Apologies for bringing this up again. My understanding is that
> replacing a node requires checking if the old node is unhashed.
Only if the caller does not check it.
__sk_nulls_replace_node_init_rcu() has already checked
sk_hashed(old), which is !hlist_nulls_unhashed(old), no?
__sk_nulls_replace_node_init_rcu(struct sock *old, ...)
    if (sk_hashed(old))
        hlist_nulls_replace_init_rcu(&old->sk_nulls_node, ...)
            if (!hlist_nulls_unhashed(old))
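IOW, if we rely on the caller's check, the helper could simply be
something like this (untested sketch, assuming the caller in patch 2/3
keeps the sk_hashed() check):

        static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
                                                        struct hlist_nulls_node *new)
        {
                __hlist_nulls_replace_rcu(old, new);
                WRITE_ONCE(old->pprev, NULL);
        }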
>
> If so, we need a return value to inform the caller that the
> replace operation would fail.
>
> >
> >> + __hlist_nulls_replace_rcu(old, new);
> >> + WRITE_ONCE(old->pprev, NULL);
> >> + return true;
> >> + }
> >> + return false;
> >> +}
> >> +
> >> /**
> >> * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
> >> * @tpos: the type * to use as a loop cursor.
> >> --
> >> 2.25.1
> >>