Date:   Tue, 27 Oct 2020 15:55:21 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     David Verbeiren <david.verbeiren@...sares.net>
Cc:     bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
        Matthieu Baerts <matthieu.baerts@...sares.net>
Subject: Re: [PATCH bpf v2] bpf: zero-fill re-used per-cpu map element

On Tue, Oct 27, 2020 at 3:15 PM David Verbeiren
<david.verbeiren@...sares.net> wrote:
>
> Zero-fill element values for all cpus other than the current one, just
> as when not using prealloc. This is the only way the bpf program can
> ensure known initial values for all cpus ('onallcpus' cannot be
> set when coming from the bpf program).
>
> The scenario is: bpf program inserts some elements in a per-cpu
> map, then deletes some (or userspace does). When later adding
> new elements using bpf_map_update_elem(), the bpf program can
> only set the value of the new elements for the current cpu.
> When prealloc is enabled, previously deleted elements are re-used.
> Without the fix, values for other cpus remain whatever they were
> when the re-used entry was previously freed.
>
> Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
> Acked-by: Matthieu Baerts <matthieu.baerts@...sares.net>
> Signed-off-by: David Verbeiren <david.verbeiren@...sares.net>
> ---
>

Looks good, but it would be good to have a unit test (see below). Maybe
in a follow-up.

Acked-by: Andrii Nakryiko <andrii@...nel.org>

> Notes:
>     v2:
>       - Moved memset() to separate pcpu_init_value() function,
>         which replaces pcpu_copy_value() but delegates to it
>         for the cases where no memset() is needed (Andrii).
>       - This function now also avoids doing the memset() for
>         the current cpu for which the value must be set
>         anyhow (Andrii).
>       - Same pcpu_init_value() used for per-cpu LRU map
>         (Andrii).
>
>       Note that I could not test the per-cpu LRU other than
>       by running the bpf selftests. lru_map and maps tests
>       passed but for the rest of the test suite, I don't
>       think I know how to spot problems...

It would be good to write a new selftest specifically for this. You
can create a single-element pre-allocated per-CPU hashmap. From
user-space, initialize it to non-zeros on all CPUs. Then delete that
key (it will get put on the free list). Then trigger the BPF program to
do an update (which should take an element from the freelist); it
doesn't matter which value you set (it could be zero). Then, from
user-space, get all per-CPU values for that new key. They should all be
zeroes with your fix and non-zero without it.

It sounds more complicated than it looks in practice :)
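
Roughly, the user-space side could look like the sketch below. This is
just an illustration, not the actual selftest: the object file name
"test_map_init.bpf.o", the map name "hashmap1" (a preallocated
BPF_MAP_TYPE_PERCPU_HASH with max_entries = 1 and u64 values), and the
program name "prog" are all made up, and I'm assuming a program type
that BPF_PROG_TEST_RUN accepts with no input data (alternatively, the
program could hang off a tracepoint and be triggered by a syscall).

#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	int nr_cpus = libbpf_num_possible_cpus();
	struct bpf_object *obj;
	__u32 key = 1, retval, duration;
	int map_fd, prog_fd, cpu;

	if (nr_cpus <= 0)
		return 1;

	__u64 vals[nr_cpus];

	obj = bpf_object__open_file("test_map_init.bpf.o", NULL);
	if (libbpf_get_error(obj))
		return 1;
	if (bpf_object__load(obj))
		goto cleanup;

	map_fd = bpf_object__find_map_fd_by_name(obj, "hashmap1");
	prog_fd = bpf_program__fd(bpf_object__find_program_by_name(obj, "prog"));
	if (map_fd < 0 || prog_fd < 0)
		goto cleanup;

	/* 1. From user-space, set the single element to non-zero on all
	 *    CPUs (user-space updates always take the onallcpus path).
	 */
	for (cpu = 0; cpu < nr_cpus; cpu++)
		vals[cpu] = 0xdeadbeef;
	if (bpf_map_update_elem(map_fd, &key, vals, BPF_ANY))
		goto cleanup;

	/* 2. Delete the key: with prealloc, the element goes back on the
	 *    freelist with its old per-CPU values still in place.
	 */
	if (bpf_map_delete_elem(map_fd, &key))
		goto cleanup;

	/* 3. Trigger the BPF program; it re-inserts the key, re-using the
	 *    just-freed element and writing only the current CPU's slot.
	 */
	if (bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval,
			      &duration))
		goto cleanup;

	/* 4. Read all per-CPU values back: with the fix they must all be
	 *    zero; without it, stale 0xdeadbeef values show up.
	 */
	memset(vals, 0, sizeof(vals));
	if (bpf_map_lookup_elem(map_fd, &key, vals))
		goto cleanup;
	for (cpu = 0; cpu < nr_cpus; cpu++)
		if (vals[cpu] != 0)
			printf("cpu %d: stale value %llx\n", cpu,
			       (unsigned long long)vals[cpu]);

cleanup:
	bpf_object__close(obj);
	return 0;
}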

>
>       Question: Is it ok to use raw_smp_processor_id() in
>       these contexts? bpf prog context should be fine, I think.
>       Is it also ok in the syscall context?

From the BPF program side it's definitely ok, because we disable CPU
migration even for sleepable programs. For syscall context, it always
uses onallcpus=true, so we'll never run this logic from syscall
context. So I think it's fine.
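
In other words, the prealloc/!onallcpus path you describe in the notes
would look roughly like this (just a sketch of the idea, not the actual
diff, which is elided below):

static void pcpu_init_value(struct bpf_htab *htab, void __percpu *pptr,
			    void *value, bool onallcpus)
{
	if (htab_is_prealloc(htab) && !onallcpus) {
		u32 size = round_up(htab->map.value_size, 8);
		int current_cpu = raw_smp_processor_id();
		int cpu;

		for_each_possible_cpu(cpu) {
			if (cpu == current_cpu)
				/* current cpu gets the caller-provided value */
				bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value,
						size);
			else
				/* other cpus get known-zero initial values */
				memset(per_cpu_ptr(pptr, cpu), 0, size);
		}
	} else {
		/* no zero-fill needed, keep the existing behavior */
		pcpu_copy_value(htab, pptr, value, onallcpus);
	}
}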

>
>  kernel/bpf/hashtab.c | 30 ++++++++++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)

[...]
