lists.openwall.net: Open Source and information security mailing list archives
Date:   Tue, 15 Nov 2022 09:48:27 +0800
From:   Hao Sun <sunhao.th@...il.com>
To:     Jiri Olsa <olsajiri@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrii Nakryiko <andrii@...nel.org>, bpf <bpf@...r.kernel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Hao Luo <haoluo@...gle.com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        Martin KaFai Lau <martin.lau@...ux.dev>,
        Stanislav Fomichev <sdf@...gle.com>,
        Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>
Subject: Re: WARNING in bpf_bprintf_prepare

On Tue, Nov 15, 2022 at 06:47, Jiri Olsa <olsajiri@...il.com> wrote:
>
> On Mon, Nov 14, 2022 at 09:04:57AM +0100, Jiri Olsa wrote:
> > On Sat, Nov 12, 2022 at 12:02:50AM +0800, Hao Sun wrote:
> > > On Fri, Nov 11, 2022 at 22:45, Jiri Olsa <olsajiri@...il.com> wrote:
> > > >
> > > > On Thu, Nov 10, 2022 at 12:53:16AM +0100, Jiri Olsa wrote:
> > > >
> > > > SNIP
> > > >
> > > > > > > > > ---
> > > > > > > > > diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> > > > > > > > > index 6a13220d2d27..5a354ae096e5 100644
> > > > > > > > > --- a/include/trace/bpf_probe.h
> > > > > > > > > +++ b/include/trace/bpf_probe.h
> > > > > > > > > @@ -78,11 +78,15 @@
> > > > > > > > >  #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
> > > > > > > > >
> > > > > > > > >  #define __BPF_DECLARE_TRACE(call, proto, args)                         \
> > > > > > > > > +static DEFINE_PER_CPU(int, __bpf_trace_tp_active_##call);              \
> > > > > > > > >  static notrace void                                                    \
> > > > > > > > >  __bpf_trace_##call(void *__data, proto)                                        \
> > > > > > > > >  {                                                                      \
> > > > > > > > >         struct bpf_prog *prog = __data;                                 \
> > > > > > > > > -       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));  \
> > > > > > > > > +                                                                       \
> > > > > > > > > +       if (likely(this_cpu_inc_return(__bpf_trace_tp_active_##call) == 1))             \
> > > > > > > > > +               CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));  \
> > > > > > > > > +       this_cpu_dec(__bpf_trace_tp_active_##call);                                     \
> > > > > > > > >  }
> > > > > > > >
> > > > > > > > This approach will hurt real use cases where
> > > > > > > > multiple and different raw_tp progs run on the same cpu.
> > > > > > >
> > > > > > > would the 2 levels of nesting help in here?
> > > > > > >
> > > > > > > I can imagine the change above would break the use case where we want to
> > > > > > > trigger a tracepoint in irq context that interrupted a task that's already
> > > > > > > in the same tracepoint
> > > > > > >
> > > > > > > with 2 levels of nesting we would trigger that tracepoint from irq and
> > > > > > > would still be safe with bpf_bprintf_prepare buffer
> > > > > >
> > > > > > How would these 2 levels work?
> > > > >
> > > > > just using the active counter like below, but I haven't tested it yet
> > > > >
> > > > > jirka
> > > >
> > > > seems to be working
> > > > Hao Sun, could you please test this patch?
> > > >
> > >
> > > Hi Jirka,
> > >
> > > I've tested the proposed patch; the warning from bpf_bprintf_prepare is no
> > > longer triggered, but the reproducer can still trigger the following
> > > warning. My test was conducted on:
> > >
> > > commit:  f67dd6ce0723 Merge tag 'slab-for-6.1-rc4-fixes'
> > > git tree:   upstream
> > > kernel config: https://pastebin.com/raw/sE5QK5HL
> > > C reproducer: https://pastebin.com/raw/X96ASi27
> > > console log *before* the patch: https://pastebin.com/raw/eSCUNFrd
> > > console log *after* the patch: https://pastebin.com/raw/tzcmdWZt
> >
> > thanks for testing.. I can't reproduce this for some reason
> >
> > I'll check some more and perhaps go with denying bpf attachment
> > for this tracepoint as Alexei suggested
>
> the change below won't allow attaching a bpf program that uses any printk
> helper to the contention_begin and bpf_trace_printk tracepoints
>
> I still need to test it on the machine that reproduced the issue
> for me.. meanwhile any feedback is appreciated
>

Hi,

Tested on my machine; with this patch applied, the C reproducer no longer
triggers any issue. The test was conducted on:

commit:  f67dd6ce0723 Merge tag 'slab-for-6.1-rc4-fixes'
git tree:   upstream
kernel config: https://pastebin.com/raw/sE5QK5HL
C reproducer: https://pastebin.com/raw/X96ASi27
full console log *before* the patch: https://pastebin.com/raw/n3x55RDr
full console log *after* the patch: https://pastebin.com/raw/7HdxnCnL

Thanks
Hao

> thanks,
> jirka
>
> ---
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 798aec816970..d88e0741b381 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1257,7 +1257,8 @@ struct bpf_prog {
>                                 enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
>                                 call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
>                                 call_get_func_ip:1, /* Do we call get_func_ip() */
> -                               tstamp_type_access:1; /* Accessed __sk_buff->tstamp_type */
> +                               tstamp_type_access:1, /* Accessed __sk_buff->tstamp_type */
> +                               call_printk:1; /* Do we call trace_printk/trace_vprintk  */
>         enum bpf_prog_type      type;           /* Type of BPF program */
>         enum bpf_attach_type    expected_attach_type; /* For some prog types */
>         u32                     len;            /* Number of filter blocks */
> diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> index 20749bd9db71..fd2725624fed 100644
> --- a/include/linux/trace_events.h
> +++ b/include/linux/trace_events.h
> @@ -742,7 +742,7 @@ void perf_event_detach_bpf_prog(struct perf_event *event);
>  int perf_event_query_prog_array(struct perf_event *event, void __user *info);
>  int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
>  int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog);
> -struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name);
> +struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name, struct bpf_prog *prog);
>  void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp);
>  int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
>                             u32 *fd_type, const char **buf,
> @@ -775,7 +775,8 @@ static inline int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf
>  {
>         return -EOPNOTSUPP;
>  }
> -static inline struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name)
> +static inline struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name,
> +                                                              struct bpf_prog *prog)
>  {
>         return NULL;
>  }
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 85532d301124..d6081e8336c6 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -3281,7 +3281,7 @@ static int bpf_raw_tp_link_attach(struct bpf_prog *prog,
>                 return -EINVAL;
>         }
>
> -       btp = bpf_get_raw_tracepoint(tp_name);
> +       btp = bpf_get_raw_tracepoint(tp_name, prog);
>         if (!btp)
>                 return -ENOENT;
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 07c0259dfc1a..9862345d9249 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -7572,6 +7572,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>                 err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
>                                         set_user_ringbuf_callback_state);
>                 break;
> +       case BPF_FUNC_trace_printk:
> +       case BPF_FUNC_trace_vprintk:
> +               env->prog->call_printk = 1;
> +               break;
>         }
>
>         if (err)
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index f2d8d070d024..9a4652a05690 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2229,10 +2229,32 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
>  extern struct bpf_raw_event_map __start__bpf_raw_tp[];
>  extern struct bpf_raw_event_map __stop__bpf_raw_tp[];
>
> -struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name)
> +static int check_printk_denylist(const char *name, struct bpf_prog *prog)
> +{
> +       static const char *denylist[] = {
> +               "contention_begin",
> +               "bpf_trace_printk",
> +       };
> +       int i;
> +
> +       if (!prog->call_printk)
> +               return 0;
> +
> +       for (i = 0; i < ARRAY_SIZE(denylist); i++) {
> +               if (!strcmp(denylist[i], name))
> +                       return 1;
> +       }
> +       return 0;
> +}
> +
> +struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name,
> +                                                struct bpf_prog *prog)
>  {
>         struct bpf_raw_event_map *btp = __start__bpf_raw_tp;
>
> +       if (check_printk_denylist(name, prog))
> +               return NULL;
> +
>         for (; btp < __stop__bpf_raw_tp; btp++) {
>                 if (!strcmp(btp->tp->name, name))
>                         return btp;
