Message-ID: <CAKH8qBtS9UHTVZ8PgFd2fOS1k6MLxot_SDBg2+H5BhoqQTOcGg@mail.gmail.com>
Date:   Mon, 31 Oct 2022 14:53:35 -0700
From:   Stanislav Fomichev <sdf@...gle.com>
To:     Kees Cook <keescook@...omium.org>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <martin.lau@...ux.dev>,
        Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
        KP Singh <kpsingh@...nel.org>, Hao Luo <haoluo@...gle.com>,
        Jiri Olsa <jolsa@...nel.org>, bpf@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 3/3] bpf/verifier: Take advantage of full
 allocation sizes

On Fri, Oct 28, 2022 at 7:54 PM Kees Cook <keescook@...omium.org> wrote:
>
> Since the full kmalloc bucket size is being explicitly allocated, pass
> back the resulting details to take advantage of the full size so that
> reallocation checking will be needed less frequently.
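Just to restate my understanding of the idea: kmalloc_size_roundup()
returns the bucket size that a kmalloc() of the requested size would
actually be served from, so the allocation pattern becomes roughly
the following (untested sketch, not the exact diff):

	size_t alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
	void *new_arr = krealloc(arr, alloc_size, GFP_KERNEL);

	if (!new_arr)
		return NULL;
	/* alloc_size / size items are now usable, not just new_n. */

i.e. callers get told how many items actually fit in the bucket.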
>
> Cc: Alexei Starovoitov <ast@...nel.org>
> Cc: Daniel Borkmann <daniel@...earbox.net>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Andrii Nakryiko <andrii@...nel.org>
> Cc: Martin KaFai Lau <martin.lau@...ux.dev>
> Cc: Song Liu <song@...nel.org>
> Cc: Yonghong Song <yhs@...com>
> Cc: KP Singh <kpsingh@...nel.org>
> Cc: Stanislav Fomichev <sdf@...gle.com>
> Cc: Hao Luo <haoluo@...gle.com>
> Cc: Jiri Olsa <jolsa@...nel.org>
> Cc: bpf@...r.kernel.org
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
>  kernel/bpf/verifier.c | 27 ++++++++++++++++-----------
>  1 file changed, 16 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 1c040d27b8f6..e58b554e862b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1020,20 +1020,23 @@ static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t
>         return dst ? dst : ZERO_SIZE_PTR;
>  }
>
> -/* resize an array from old_n items to new_n items. the array is reallocated if it's too
> - * small to hold new_n items. new items are zeroed out if the array grows.
> +/* Resize an array from old_n items to *new_n items. The array is
> + * reallocated if it's too small to hold *new_n items. New items are
> + * zeroed out if the array grows. Allocation is rounded up to next kmalloc
> + * bucket size to reduce frequency of resizing. *new_n contains the new
> + * total number of items that will fit.
>   *
> - * Contrary to krealloc_array, does not free arr if new_n is zero.
> + * Contrary to krealloc, does not free arr if new_n is zero.
>   */
> -static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
> +static void *realloc_array(void *arr, size_t old_n, size_t *new_n, size_t size)
>  {
>         size_t alloc_size;
>         void *new_arr;
>
> -       if (!new_n || old_n == new_n)
> +       if (!new_n || !*new_n || old_n == *new_n)
>                 goto out;
>
> -       alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
> +       alloc_size = kmalloc_size_roundup(size_mul(*new_n, size));
>         new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
>         if (!new_arr) {
>                 kfree(arr);
> @@ -1041,8 +1044,9 @@ static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
>         }
>         arr = new_arr;
>
> -       if (new_n > old_n)
> -               memset(arr + old_n * size, 0, (new_n - old_n) * size);
> +       *new_n = alloc_size / size;
> +       if (*new_n > old_n)
> +               memset(arr + old_n * size, 0, (*new_n - old_n) * size);
>
>  out:
>         return arr ? arr : ZERO_SIZE_PTR;

[..]

> @@ -1074,7 +1078,7 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
>
>  static int resize_reference_state(struct bpf_func_state *state, size_t n)
>  {
> -       state->refs = realloc_array(state->refs, state->acquired_refs, n,
> +       state->refs = realloc_array(state->refs, state->acquired_refs, &n,
>                                     sizeof(struct bpf_reference_state));
>         if (!state->refs)
>                 return -ENOMEM;

Patches 1 & 2 look good, but I'm not sure about this part. Later in
the same routine we do:

state->acquired_refs = n;

And acquire_reference_state() does "new_ofs = state->acquired_refs;" to append.

Doesn't that change the semantics a bit? acquired_refs used to mean
the array size (the number of acquired references); now it means the
array capacity.
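Concretely, with made-up numbers (a sketch of my reading, not real code):

	/* say acquired_refs == 1: one real entry, at refs[0] */
	new_ofs = state->acquired_refs;		/* 1 */
	resize_reference_state(state, 2);	/* realloc_array() rounds the
						 * 2-entry request up to a
						 * bucket holding, say, 8, so
						 * acquired_refs becomes 8 */
	state->refs[new_ofs].id = id;		/* fills refs[1]; refs[2..7]
						 * are zeroed entries that any
						 * loop bounded by
						 * acquired_refs now walks,
						 * and the next acquire
						 * appends at offset 8 */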
Should we keep this part as is, but add a shortcut to realloc_array():
when ksize(ptr) == kmalloc_size_roundup(new size), reuse the existing
array?
Or am I missing something? (haven't looked too deep)
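If we go that route, I'm picturing something along these lines
(completely untested sketch, keeping the old "new_n is an item count"
contract and only skipping the krealloc() when the current bucket is
already big enough):

	static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
	{
		size_t alloc_size;
		void *new_arr;

		if (!new_n || old_n == new_n)
			goto out;

		alloc_size = kmalloc_size_roundup(size_mul(new_n, size));
		/* Reuse the current allocation if its bucket already fits. */
		if (ZERO_OR_NULL_PTR(arr) || ksize(arr) < alloc_size) {
			new_arr = krealloc(arr, alloc_size, GFP_KERNEL);
			if (!new_arr) {
				kfree(arr);
				return NULL;
			}
			arr = new_arr;
		}

		/* new_n still means "number of items", as before. */
		if (new_n > old_n)
			memset(arr + old_n * size, 0, (new_n - old_n) * size);

	out:
		return arr ? arr : ZERO_SIZE_PTR;
	}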
> @@ -1090,11 +1094,12 @@ static int grow_stack_state(struct bpf_func_state *state, int size)
>         if (old_n >= n)
>                 return 0;
>
> -       state->stack = realloc_array(state->stack, old_n, n, sizeof(struct bpf_stack_state));
> +       state->stack = realloc_array(state->stack, old_n, &n,
> +                                    sizeof(struct bpf_stack_state));
>         if (!state->stack)
>                 return -ENOMEM;
>
> -       state->allocated_stack = size;
> +       state->allocated_stack = n * BPF_REG_SIZE;
>         return 0;
>  }
>
> --
> 2.34.1
>
