Message-ID: <CAEf4BzY7sx0gW=o5rM8WDzW1J0U_Yep3MMuJScoMg-hBAeBPCg@mail.gmail.com>
Date:   Fri, 30 Apr 2021 09:31:36 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Brendan Jackman <jackmanb@...gle.com>
Cc:     bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        open list <linux-kernel@...r.kernel.org>,
        KP Singh <kpsingh@...nel.org>,
        Florent Revest <revest@...omium.org>
Subject: Re: [PATCH v2 bpf-next] libbpf: Fix signed overflow in ringbuf_process_ring

On Thu, Apr 29, 2021 at 6:05 AM Brendan Jackman <jackmanb@...gle.com> wrote:
>
> One of our benchmarks running in (Google-internal) CI pushes data
> through the ringbuf faster than userspace is able to consume
> it. In this case it seems we're actually able to get >INT_MAX entries
> in a single ring_buffer__consume call. ASAN detected that cnt
> overflows in this case.
>
> Fix by using a 64-bit counter internally and then capping the result to
> INT_MAX before converting to the int return type.
>
> Fixes: bf99c936f947 ("libbpf: Add BPF ring buffer support")
> Signed-off-by: Brendan Jackman <jackmanb@...gle.com>
> ---
>
> diff v1->v2: Now we don't break the loop at INT_MAX, we just cap the reported
> entry count.
>
> Note: I feel a bit guilty about the fact that this makes the reader
> think about implicit conversions. Nobody likes thinking about that.
>
> But explicit casts don't really help with clarity:
>
>   return (int)min(cnt, (int64_t)INT_MAX); // ugh
>

I'd go with

if (cnt > INT_MAX)
    return INT_MAX;

return cnt;

If you don't mind, I can patch it up while applying?
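
For completeness, here's a rough sketch of how the tail of
ringbuf_process_ring() would read with that explicit check instead of
min() (paraphrased from the patch above, not the exact file contents):

done:
	/* cnt is int64_t, so it can exceed INT_MAX; clamp it before the
	 * implicit conversion back to the int return type.
	 */
	if (cnt > INT_MAX)
		return INT_MAX;
	return cnt;

Same behavior as the min()-based version, just without making the
reader reason about the implicit narrowing.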

> shrug..
>
>  tools/lib/bpf/ringbuf.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> index e7a8d847161f..2e114c2d0047 100644
> --- a/tools/lib/bpf/ringbuf.c
> +++ b/tools/lib/bpf/ringbuf.c
> @@ -204,7 +204,9 @@ static inline int roundup_len(__u32 len)
>
>  static int ringbuf_process_ring(struct ring* r)
>  {
> -       int *len_ptr, len, err, cnt = 0;
> +       int *len_ptr, len, err;
> +       /* 64-bit to avoid overflow in case of extreme application behavior */
> +       int64_t cnt = 0;
>         unsigned long cons_pos, prod_pos;
>         bool got_new_data;
>         void *sample;
> @@ -240,7 +242,7 @@ static int ringbuf_process_ring(struct ring* r)
>                 }
>         } while (got_new_data);
>  done:
> -       return cnt;
> +       return min(cnt, INT_MAX);
>  }
>
>  /* Consume available ring buffer(s) data without event polling.
> @@ -263,8 +265,8 @@ int ring_buffer__consume(struct ring_buffer *rb)
>  }
>
>  /* Poll for available data and consume records, if any are available.
> - * Returns number of records consumed, or negative number, if any of the
> - * registered callbacks returned error.
> + * Returns number of records consumed (or INT_MAX, whichever is less), or
> + * negative number, if any of the registered callbacks returned error.
>   */
>  int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
>  {
> --
> 2.31.1.498.g6c1eba8ee3d-goog
>
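
As an aside, from the caller's side nothing changes: the return value
is still an int, just saturated at INT_MAX. A minimal consumer sketch
(the map fd argument and callback name are hypothetical, error handling
trimmed):

	#include <limits.h>
	#include <stdio.h>
	#include <bpf/libbpf.h>

	/* hypothetical per-sample callback; returning <0 aborts consumption */
	static int handle_sample(void *ctx, void *data, size_t size)
	{
		return 0;
	}

	int consume_all(int ringbuf_map_fd)
	{
		struct ring_buffer *rb;
		int n;

		rb = ring_buffer__new(ringbuf_map_fd, handle_sample, NULL, NULL);
		if (!rb)
			return -1;

		n = ring_buffer__consume(rb);
		if (n < 0)
			fprintf(stderr, "callback failed: %d\n", n);
		else if (n == INT_MAX)
			fprintf(stderr, "consumed at least INT_MAX records\n");
		else
			fprintf(stderr, "consumed %d records\n", n);

		ring_buffer__free(rb);
		return n;
	}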
