Date:   Wed, 8 Jan 2020 11:24:06 +0100
From:   Jiri Olsa <jolsa@...hat.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Jiri Olsa <jolsa@...nel.org>, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, Andrii Nakryiko <andriin@...com>,
        Yonghong Song <yhs@...com>, Martin KaFai Lau <kafai@...com>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>,
        David Miller <davem@...hat.com>
Subject: Re: [PATCH 2/5] bpf: Add bpf_perf_event_output_kfunc

On Tue, Jan 07, 2020 at 02:13:42PM -0800, Alexei Starovoitov wrote:
> On Tue, Jan 7, 2020 at 4:25 AM Jiri Olsa <jolsa@...hat.com> wrote:
> >
> > On Mon, Jan 06, 2020 at 03:27:21PM -0800, Alexei Starovoitov wrote:
> > > On Sun, Dec 29, 2019 at 03:37:37PM +0100, Jiri Olsa wrote:
> > > > Adding support to use perf_event_output in
> > > > BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
> > > >
> > > > There are no pt_regs available in the trampoline,
> > > > so we get them via the bpf_kfunc_regs array.
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> > > > ---
> > > >  kernel/trace/bpf_trace.c | 67 ++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 67 insertions(+)
> > > >
> > > > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > > > index e5ef4ae9edb5..1b270bbd9016 100644
> > > > --- a/kernel/trace/bpf_trace.c
> > > > +++ b/kernel/trace/bpf_trace.c
> > > > @@ -1151,6 +1151,69 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> > > >     }
> > > >  }
> > > >
> > > > +struct bpf_kfunc_regs {
> > > > +   struct pt_regs regs[3];
> > > > +};
> > > > +
> > > > +static DEFINE_PER_CPU(struct bpf_kfunc_regs, bpf_kfunc_regs);
> > > > +static DEFINE_PER_CPU(int, bpf_kfunc_nest_level);
> > >
> > > Thanks a bunch for working on it.
> > >
> > > I don't understand why a new regs array and nest level are needed.
> > > Can raw_tp_prog_func_proto() be reused as-is,
> > > instead of patches 2, 3, 4?
> >
> > I thought that we might want to trace functions within the
> > raw tracepoint call, which would be prevented if we used
> > the same nest variable
> >
> > now I'm not sure whether there's some other issue with nesting
> > bpf programs like that.. I'll need to check
> 
> but nesting is what bpf_raw_tp_nest_level is supposed to solve, no?
> I just realized that we already have three *_nest_level counters
> in that file. Not sure why one is not enough.
> There was an issue in the past when tracepoint, kprobe and skb
> collided and we had nasty memory corruption, but that was before
> _nest_level was introduced. Not sure how we got to three independent
> counters.

ok, I'm not sure what the initial impulse for that was now,
I'll make it share the counter with raw tracepoints
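
something along these lines (untested sketch, just to show what I mean by
sharing it - reuse the existing raw tracepoint accessors instead of adding
the new bpf_kfunc_regs array; argument details from memory, not the final
patch):

  BPF_CALL_5(bpf_perf_event_output_kfunc, void *, ctx, struct bpf_map *, map,
             u64, flags, void *, data, u64, size)
  {
          /* grab a pt_regs slot from the per-cpu array shared with raw
           * tracepoints; this also bumps the shared nest counter
           */
          struct pt_regs *regs = get_bpf_raw_tp_regs();
          int ret;

          if (IS_ERR(regs))
                  return PTR_ERR(regs);

          /* the trampoline gives us no pt_regs, so fill in caller regs */
          perf_fetch_caller_regs(regs);
          ret = ____bpf_perf_event_output(regs, map, flags, data, size);

          /* drop the shared nest counter */
          put_bpf_raw_tp_regs();
          return ret;
  }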

jirka
