Message-ID: <CAMB2axPTU+HsJ_6nKDaq8xnGhcoXZCgy=X2wiODYNbZMdRkSHg@mail.gmail.com>
Date: Tue, 5 Aug 2025 09:25:09 -0700
From: Amery Hung <ameryhung@...il.com>
To: Martin KaFai Lau <martin.lau@...ux.dev>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org, alexei.starovoitov@...il.com,
andrii@...nel.org, daniel@...earbox.net, memxor@...il.com, kpsingh@...nel.org,
martin.lau@...nel.org, yonghong.song@...ux.dev, song@...nel.org,
haoluo@...gle.com, kernel-team@...a.com
Subject: Re: [RFC PATCH bpf-next v1 03/11] bpf: Open code bpf_selem_unlink_storage
in bpf_selem_unlink
On Fri, Aug 1, 2025 at 5:58 PM Martin KaFai Lau <martin.lau@...ux.dev> wrote:
>
> On 7/29/25 11:25 AM, Amery Hung wrote:
> > void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
> > {
> > + struct bpf_local_storage_map *storage_smap;
> > + struct bpf_local_storage *local_storage = NULL;
> > + bool bpf_ma, free_local_storage = false;
> > + HLIST_HEAD(selem_free_list);
> > struct bpf_local_storage_map_bucket *b;
> > - struct bpf_local_storage_map *smap;
> > - unsigned long flags;
> > + struct bpf_local_storage_map *smap = NULL;
> > + unsigned long flags, b_flags;
> >
> > if (likely(selem_linked_to_map_lockless(selem))) {
>
> Can we simplify the bpf_selem_unlink() function by skipping this map_lockless
> check,
>
> > smap = rcu_dereference_check(SDATA(selem)->smap, bpf_rcu_lock_held());
> > b = select_bucket(smap, selem);
> > - raw_spin_lock_irqsave(&b->lock, flags);
> > + }
> >
> > - /* Always unlink from map before unlinking from local_storage
> > - * because selem will be freed after successfully unlinked from
> > - * the local_storage.
> > - */
> > - bpf_selem_unlink_map_nolock(selem);
> > - raw_spin_unlock_irqrestore(&b->lock, flags);
> > + if (likely(selem_linked_to_storage_lockless(selem))) {
>
> only depend on this and then proceed to take the local_storage->lock. Then
> recheck selem_linked_to_storage(selem), bpf_selem_unlink_map(selem) first, and
> then bpf_selem_unlink_storage_nolock(selem) last.
Thanks for the suggestion. I think it will simplify the function. Just
making sure I am getting you right: you mean instead of open-coding both
unlink_map and unlink_storage, only open-code unlink_storage. First,
grab local_storage->lock and call bpf_selem_unlink_map(). Then, proceed
to unlink_storage only if bpf_selem_unlink_map() succeeds.
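
Roughly like the sketch below? (Untested, just to confirm I am reading
your suggestion right. The helper names and arguments are taken from the
existing code and this patch, so the exact signatures may need adjusting.)

void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
{
	struct bpf_local_storage_map *storage_smap;
	struct bpf_local_storage *local_storage;
	bool bpf_ma, free_local_storage = false;
	HLIST_HEAD(selem_free_list);
	unsigned long flags;

	/* rough, untested sketch of the suggested simplification */
	if (unlikely(!selem_linked_to_storage_lockless(selem)))
		return;

	local_storage = rcu_dereference_check(selem->local_storage,
					      bpf_rcu_lock_held());
	storage_smap = rcu_dereference_check(local_storage->smap,
					     bpf_rcu_lock_held());
	bpf_ma = check_storage_bpf_ma(local_storage, storage_smap, selem);

	raw_spin_lock_irqsave(&local_storage->lock, flags);
	if (likely(selem_linked_to_storage(selem))) {
		/* Unlink from map before unlinking from local_storage,
		 * assuming bpf_selem_unlink_map() still grabs b->lock
		 * internally.
		 */
		bpf_selem_unlink_map(selem);
		free_local_storage = bpf_selem_unlink_storage_nolock(
			local_storage, selem, true, &selem_free_list);
	}
	raw_spin_unlock_irqrestore(&local_storage->lock, flags);

	bpf_selem_free_list(&selem_free_list, reuse_now);

	if (free_local_storage)
		bpf_local_storage_free(local_storage, storage_smap, bpf_ma, reuse_now);
}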
>
> Then bpf_selem_unlink_map can use selem->local_storage->owner to select_bucket().
Not sure what this part means. Could you elaborate?
>
> > + local_storage = rcu_dereference_check(selem->local_storage,
> > + bpf_rcu_lock_held());
> > + storage_smap = rcu_dereference_check(local_storage->smap,
> > + bpf_rcu_lock_held());
> > + bpf_ma = check_storage_bpf_ma(local_storage, storage_smap, selem);
> > }
> >
> > - bpf_selem_unlink_storage(selem, reuse_now);
> > + if (local_storage)
> > + raw_spin_lock_irqsave(&local_storage->lock, flags);
> > + if (smap)
> > + raw_spin_lock_irqsave(&b->lock, b_flags);
> > +
> > + /* Always unlink from map before unlinking from local_storage
> > + * because selem will be freed after successfully unlinked from
> > + * the local_storage.
> > + */
> > + if (smap)
> > + bpf_selem_unlink_map_nolock(selem);
> > + if (local_storage && likely(selem_linked_to_storage(selem)))
> > + free_local_storage = bpf_selem_unlink_storage_nolock(
> > + local_storage, selem, true, &selem_free_list);
> > +
> > + if (smap)
> > + raw_spin_unlock_irqrestore(&b->lock, b_flags);
> > + if (local_storage)
> > + raw_spin_unlock_irqrestore(&local_storage->lock, flags);
> > +
> > + bpf_selem_free_list(&selem_free_list, reuse_now);
> > +
> > + if (free_local_storage)
> > + bpf_local_storage_free(local_storage, storage_smap, bpf_ma, reuse_now);
> > }
> >
> > void __bpf_local_storage_insert_cache(struct bpf_local_storage *local_storage,
>