Date:   Tue, 17 Nov 2020 20:10:48 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Xin Yin <yinxin_1989@...yun.com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Andrii Nakryiko <andriin@...com>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] bpf:Fix update dirty data in lru percpu hash maps

On Tue, Nov 10, 2020 at 1:04 AM Xin Yin <yinxin_1989@...yun.com> wrote:
>
> For lru_percpu hash map element updates, prealloc_lru_pop() may
> return an uncleared element. When the function is called from a bpf
> prog, "onallcpus" is set to false, so the update may leave an
> element holding dirty data on the other CPUs.
>
> Clear the per-cpu values of the element before using it.
>
> Signed-off-by: Xin Yin <yinxin_1989@...yun.com>

The alternative fix, commit d3bec0138bfb ("bpf: Zero-fill re-used
per-cpu map element"), was already merged.
Please double-check that it fixes your test.

> ---
>  kernel/bpf/hashtab.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 728ffec52cf3..b1f781ec20b6 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -709,6 +709,16 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr,
>         }
>  }
>
> +static void pcpu_init_value(struct bpf_htab *htab, void __percpu *pptr)
> +{
> +       u32 size = round_up(htab->map.value_size, 8);
> +       int cpu;
> +
> +       for_each_possible_cpu(cpu) {
> +               memset(per_cpu_ptr(pptr, cpu), 0, size);
> +       }
> +}
> +
>  static bool fd_htab_map_needs_adjust(const struct bpf_htab *htab)
>  {
>         return htab->map.map_type == BPF_MAP_TYPE_HASH_OF_MAPS &&
> @@ -1075,6 +1085,9 @@ static int __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
>                 pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
>                                 value, onallcpus);
>         } else {
> +               if (!onallcpus)
> +                       pcpu_init_value(htab,
> +                                       htab_elem_get_ptr(l_new, key_size));
>                 pcpu_copy_value(htab, htab_elem_get_ptr(l_new, key_size),
>                                 value, onallcpus);
>                 hlist_nulls_add_head_rcu(&l_new->hash_node, head);
> --
> 2.19.5
>
