Date:   Fri, 11 Nov 2022 15:45:41 +0100
From:   Jiri Olsa <olsajiri@...il.com>
To:     Jiri Olsa <olsajiri@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Hao Sun <sunhao.th@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrii Nakryiko <andrii@...nel.org>, bpf <bpf@...r.kernel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Hao Luo <haoluo@...gle.com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        Martin KaFai Lau <martin.lau@...ux.dev>,
        Stanislav Fomichev <sdf@...gle.com>,
        Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>
Subject: Re: WARNING in bpf_bprintf_prepare

On Thu, Nov 10, 2022 at 12:53:16AM +0100, Jiri Olsa wrote:

SNIP

> > > > > ---
> > > > > diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> > > > > index 6a13220d2d27..5a354ae096e5 100644
> > > > > --- a/include/trace/bpf_probe.h
> > > > > +++ b/include/trace/bpf_probe.h
> > > > > @@ -78,11 +78,15 @@
> > > > >  #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
> > > > >
> > > > >  #define __BPF_DECLARE_TRACE(call, proto, args)                         \
> > > > > +static DEFINE_PER_CPU(int, __bpf_trace_tp_active_##call);              \
> > > > >  static notrace void                                                    \
> > > > >  __bpf_trace_##call(void *__data, proto)                                        \
> > > > >  {                                                                      \
> > > > >         struct bpf_prog *prog = __data;                                 \
> > > > > -       CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));  \
> > > > > +                                                                       \
> > > > > +       if (likely(this_cpu_inc_return(__bpf_trace_tp_active_##call) == 1))             \
> > > > > +               CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));  \
> > > > > +       this_cpu_dec(__bpf_trace_tp_active_##call);                                     \
> > > > >  }
> > > >
> > > > This approach will hurt real use cases where
> > > > multiple different raw_tp progs run on the same cpu.
> > >
> > > would 2 levels of nesting help here?
> > >
> > > I can imagine the change above would break the use case where we want to
> > > trigger a tracepoint from irq context that interrupted a task that's already
> > > in the same tracepoint
> > >
> > > with 2 levels of nesting we would still trigger that tracepoint from the irq
> > > and would still be safe with the bpf_bprintf_prepare buffers
> > 
> > How would these 2 levels work?
> 
> just using the active counter as below, but I haven't tested it yet
> 
> jirka

it seems to be working
Hao Sun, could you please test this patch?

thanks,
jirka
> 
> 
> ---
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> index 6a13220d2d27..ca5dd34478b7 100644
> --- a/include/trace/bpf_probe.h
> +++ b/include/trace/bpf_probe.h
> @@ -78,11 +78,15 @@
>  #define CAST_TO_U64(...) CONCATENATE(__CAST, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
>  
>  #define __BPF_DECLARE_TRACE(call, proto, args)				\
> +static DEFINE_PER_CPU(int, __bpf_trace_tp_active_##call);		\
>  static notrace void							\
>  __bpf_trace_##call(void *__data, proto)					\
>  {									\
>  	struct bpf_prog *prog = __data;					\
> -	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));	\
> +									\
> +	if (likely(this_cpu_inc_return(__bpf_trace_tp_active_##call) < 3))		\
> +		CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args));	\
> +	this_cpu_dec(__bpf_trace_tp_active_##call);					\
>  }
>  
>  #undef DECLARE_EVENT_CLASS
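
For illustration, here is a minimal userspace sketch of the guard the patch
adds: each entry bumps a counter, the body runs only while fewer than 3
levels are active, and the counter is dropped again on exit. The limit of 3
is an assumption mirroring the per-cpu buffers behind bpf_bprintf_prepare();
the names below are illustrative, not kernel APIs.

/*
 * Minimal userspace sketch (not kernel code) of the re-entrancy guard
 * added by the patch above. The limit of 3 is an assumption mirroring
 * the number of per-cpu buffers used by bpf_bprintf_prepare(); all
 * names here are illustrative.
 */
#include <stdio.h>

#define NEST_LIMIT 3

static int tp_active;	/* stands in for DEFINE_PER_CPU(int, __bpf_trace_tp_active_##call) */

static void handler_body(int depth);

/* analogue of the patched __bpf_trace_##call(): guard, run, unguard */
static void handler(int depth)
{
	if (++tp_active < NEST_LIMIT)	/* like this_cpu_inc_return(...) < 3 */
		handler_body(depth);
	else
		printf("depth %d: skipped, nesting limit reached\n", depth);
	tp_active--;			/* like this_cpu_dec(...) */
}

/* body that "re-enters" the tracepoint, like an irq interrupting the task */
static void handler_body(int depth)
{
	printf("depth %d: bpf_trace_run() would run here\n", depth);
	if (depth < 5)
		handler(depth + 1);	/* simulate a nested tracepoint hit */
}

int main(void)
{
	handler(1);	/* prints depth 1 and 2 running, depth 3 skipped */
	return 0;
}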
