lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <CAADnVQKq+b7uJb0J32swWEZmoDfdrUfx=f8ndSM4vicTCtYebA@mail.gmail.com>
Date:   Tue, 11 May 2021 14:07:19 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Florent Revest <revest@...omium.org>
Cc:     bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        KP Singh <kpsingh@...nel.org>,
        Brendan Jackman <jackmanb@...gle.com>,
        Stanislav Fomichev <sdf@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        syzbot+63122d0bc347f18c1884@...kaller.appspotmail.com
Subject: Re: [PATCH bpf v2] bpf: Fix nested bpf_bprintf_prepare with more
 per-cpu buffers

On Tue, May 11, 2021 at 1:12 AM Florent Revest <revest@...omium.org> wrote:
>
> The bpf_seq_printf, bpf_trace_printk and bpf_snprintf helpers share one
> per-cpu buffer that they use to store temporary data (arguments to
> bprintf). They "get" that buffer with try_get_fmt_tmp_buf and "put" it
> by the end of their scope with bpf_bprintf_cleanup.
>
> If one of these helpers is called within the scope of another, the
> second "get" fails. For example: a first bpf program calls
> bpf_trace_printk, which calls raw_spin_lock_irqsave, which is traced
> by a second bpf program that calls bpf_snprintf. Essentially, these
> helpers are not re-entrant: the nested call returns -EBUSY and prints
> a warning message once.
>
> This patch triples the number of bprintf buffers to allow three levels
> of nesting. This is very similar to what was done for tracepoints in
> commit 9594dc3c7e7 ("bpf: fix nested bpf tracepoints with per-cpu data").
>
> Fixes: d9c9e4db186a ("bpf: Factorize bpf_trace_printk and bpf_seq_printf")
> Reported-by: syzbot+63122d0bc347f18c1884@...kaller.appspotmail.com
> Signed-off-by: Florent Revest <revest@...omium.org>
> ---
>  kernel/bpf/helpers.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 544773970dbc..ef658a9ea5c9 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -696,34 +696,35 @@ static int bpf_trace_copy_string(char *buf, void *unsafe_ptr, char fmt_ptype,
>   */
>  #define MAX_PRINTF_BUF_LEN     512
>
> -struct bpf_printf_buf {
> -       char tmp_buf[MAX_PRINTF_BUF_LEN];
> +/* Support executing three nested bprintf helper calls on a given CPU */
> +struct bpf_bprintf_buffers {
> +       char tmp_bufs[3][MAX_PRINTF_BUF_LEN];
>  };
> -static DEFINE_PER_CPU(struct bpf_printf_buf, bpf_printf_buf);
> -static DEFINE_PER_CPU(int, bpf_printf_buf_used);
> +static DEFINE_PER_CPU(struct bpf_bprintf_buffers, bpf_bprintf_bufs);
> +static DEFINE_PER_CPU(int, bpf_bprintf_nest_level);
>
>  static int try_get_fmt_tmp_buf(char **tmp_buf)
>  {
> -       struct bpf_printf_buf *bufs;
> -       int used;
> +       struct bpf_bprintf_buffers *bufs;
> +       int nest_level;
>
>         preempt_disable();
> -       used = this_cpu_inc_return(bpf_printf_buf_used);
> -       if (WARN_ON_ONCE(used > 1)) {
> -               this_cpu_dec(bpf_printf_buf_used);
> +       nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
> +       if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(bufs->tmp_bufs))) {
> +               this_cpu_dec(bpf_bprintf_nest_level);

Applied to bpf tree.
I think in the end the fix is simple enough, and much better than an
on-stack buffer.
