Message-ID: <CAEf4BzZWzxkUvUc+3ufLahSsGi+BrAv7xQBGhZPCNuy-ZYji-w@mail.gmail.com>
Date: Tue, 28 Oct 2025 11:03:30 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Leon Hwang <leon.hwang@...ux.dev>
Cc: bpf@...r.kernel.org, ast@...nel.org, andrii@...nel.org, 
	daniel@...earbox.net, martin.lau@...ux.dev, eddyz87@...il.com, 
	song@...nel.org, yonghong.song@...ux.dev, john.fastabend@...il.com, 
	kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org, 
	memxor@...il.com, linux-kernel@...r.kernel.org, kernel-patches-bot@...com
Subject: Re: [PATCH bpf v3 1/4] bpf: Free special fields when update
 [lru_,]percpu_hash maps

On Sun, Oct 26, 2025 at 8:40 AM Leon Hwang <leon.hwang@...ux.dev> wrote:
>
> As [lru_,]percpu_hash maps support BPF_KPTR_{REF,PERCPU}, missing
> calls to 'bpf_obj_free_fields()' in 'pcpu_copy_value()' could cause the
> memory referenced by BPF_KPTR_{REF,PERCPU} fields to be held until the
> map gets freed.
>
> Fix this by calling 'bpf_obj_free_fields()' after
> 'copy_map_value[,_long]()' in 'pcpu_copy_value()'.
>
> Fixes: 65334e64a493 ("bpf: Support kptrs in percpu hashmap and percpu LRU hashmap")
> Signed-off-by: Leon Hwang <leon.hwang@...ux.dev>
> ---
>  kernel/bpf/hashtab.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index c2fcd0cd51e51..26308adc9ccb3 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -950,12 +950,14 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr,
>         if (!onallcpus) {
>                 /* copy true value_size bytes */
>                 copy_map_value(&htab->map, this_cpu_ptr(pptr), value);
> +               bpf_obj_free_fields(htab->map.record, this_cpu_ptr(pptr));

It would make sense to assign the this_cpu_ptr() result to a local
variable and reuse it between copy_map_value() and bpf_obj_free_fields().
Consider that for a follow-up.
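For reference, the refactor being suggested might look roughly like the
sketch below (my own illustration, not a tested patch; the surrounding
context is pcpu_copy_value() as quoted above, and the local variable
names are arbitrary):

```c
	if (!onallcpus) {
		void *ptr = this_cpu_ptr(pptr);

		/* copy true value_size bytes */
		copy_map_value(&htab->map, ptr, value);
		/* as in the fix, free special (e.g. kptr) fields after the copy */
		bpf_obj_free_fields(htab->map.record, ptr);
	} else {
		u32 size = round_up(htab->map.value_size, 8);
		int off = 0, cpu;

		for_each_possible_cpu(cpu) {
			void *cpu_ptr = per_cpu_ptr(pptr, cpu);

			copy_map_value_long(&htab->map, cpu_ptr, value + off);
			bpf_obj_free_fields(htab->map.record, cpu_ptr);
			off += size;
		}
	}
```

This avoids evaluating this_cpu_ptr()/per_cpu_ptr() twice per value,
which is the duplication the comment above points out.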


>         } else {
>                 u32 size = round_up(htab->map.value_size, 8);
>                 int off = 0, cpu;
>
>                 for_each_possible_cpu(cpu) {
>                         copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value + off);
> +                       bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
>                         off += size;
>                 }
>         }
> --
> 2.51.0
>
