Message-ID: <CAMB2axPx2RajLzhoOsnffhrOxkw7Zy=D=vHam_Y_5wKS0cqf0g@mail.gmail.com>
Date: Mon, 27 Oct 2025 10:04:47 -0700
From: Amery Hung <ameryhung@...il.com>
To: Leon Hwang <leon.hwang@...ux.dev>
Cc: bpf@...r.kernel.org, ast@...nel.org, andrii@...nel.org,
daniel@...earbox.net, martin.lau@...ux.dev, eddyz87@...il.com,
song@...nel.org, yonghong.song@...ux.dev, john.fastabend@...il.com,
kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org,
memxor@...il.com, linux-kernel@...r.kernel.org, kernel-patches-bot@...com
Subject: Re: [PATCH bpf v3 3/4] bpf: Free special fields when update local
storage maps
On Mon, Oct 27, 2025 at 9:15 AM Leon Hwang <leon.hwang@...ux.dev> wrote:
>
> Hi Amery,
>
> On 2025/10/27 23:44, Amery Hung wrote:
> > On Sun, Oct 26, 2025 at 8:41 AM Leon Hwang <leon.hwang@...ux.dev> wrote:
> >>
> >> When updating local storage maps with BPF_F_LOCK on the fast path, the
> >> special fields were not freed after being replaced. This could cause
> >> memory referenced by BPF_KPTR_{REF,PERCPU} fields to be held until the
> >> map gets freed.
> >>
> >> Similarly, on the other path, the old sdata's special fields were never
> >> freed regardless of whether BPF_F_LOCK was used, causing the same issue.
> >>
> >> Fix this by calling 'bpf_obj_free_fields()' after
> >> 'copy_map_value_locked()' to properly release the old fields.
> >>
> >> Fixes: 9db44fdd8105 ("bpf: Support kptrs in local storage maps")
> >> Signed-off-by: Leon Hwang <leon.hwang@...ux.dev>
> >> ---
> >> kernel/bpf/bpf_local_storage.c | 3 +++
> >> 1 file changed, 3 insertions(+)
> >>
> >> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> >> index b931fbceb54da..8e3aea4e07c50 100644
> >> --- a/kernel/bpf/bpf_local_storage.c
> >> +++ b/kernel/bpf/bpf_local_storage.c
> >> @@ -609,6 +609,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
> >> if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
> >> copy_map_value_locked(&smap->map, old_sdata->data,
> >> value, false);
> >> + bpf_obj_free_fields(smap->map.record, old_sdata->data);
> >
> > [ ... ]
> >
> >> return old_sdata;
> >> }
> >> }
> >> @@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
> >> if (old_sdata && (map_flags & BPF_F_LOCK)) {
> >> copy_map_value_locked(&smap->map, old_sdata->data, value,
> >> false);
> >> + bpf_obj_free_fields(smap->map.record, old_sdata->data);
> >
> > The one above and this make sense. Thanks for fixing it.
> >
>
> Thanks for your review.
>
> >> selem = SELEM(old_sdata);
> >> goto unlock;
> >> }
> >> @@ -654,6 +656,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
> >>
> >> /* Third, remove old selem, SELEM(old_sdata) */
> >> if (old_sdata) {
> >> + bpf_obj_free_fields(smap->map.record, old_sdata->data);
> >
> > Is this really needed? bpf_selem_free_list() later should free special
> > fields in this selem.
> >
>
> Yes, it’s needed. The new selftest confirms that the special fields are
> not freed when updating a local storage map.
>
Hmmm. I don't think so.
> Also, bpf_selem_unlink_storage_nolock() doesn’t invoke
> bpf_selem_free_list(), unlike bpf_selem_unlink_storage(). So we need to
> call bpf_obj_free_fields() here explicitly to free those fields.
>
bpf_selem_unlink_storage_nolock() unlinks the old selem and adds it to
old_selem_free_list. Later, bpf_selem_free_list() calls bpf_selem_free()
on each selem in that list, which should also free the special fields in
the selem.
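To spell out the path I mean, here is a rough sketch from my reading of
kernel/bpf/bpf_local_storage.c (simplified, not the literal code):

  /* slow path of bpf_local_storage_update(), simplified */
  bpf_selem_unlink_storage_nolock(local_storage, SELEM(old_sdata), ...);
          /* old selem is moved onto the local old_selem_free_list */

  raw_spin_unlock_irqrestore(&local_storage->lock, flags);

  bpf_selem_free_list(&old_selem_free_list, false);
          /* -> bpf_selem_free(selem, smap, ...)
           *    -> bpf_obj_free_fields(smap->map.record, SDATA(selem)->data)
           * so the kptr/special fields of the old selem are dropped here
           * (possibly after an RCU grace period), without the extra call
           * added in this hunk.
           */
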
The selftest may have checked the refcount before a task trace RCU grace
period elapsed and mistaken the still-held reference for a leak. I added
a 300ms delay before the checking program runs, and the test did not
detect any leak even without this specific bpf_obj_free_fields().
> Thanks,
> Leon
>
> [...]
>
>