Date:   Wed, 30 Mar 2022 09:34:11 -0700
From:   Beau Belgrave <beaub@...ux.microsoft.com>
To:     Song Liu <song@...nel.org>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        linux-trace-devel <linux-trace-devel@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Network Development <netdev@...r.kernel.org>,
        linux-arch <linux-arch@...r.kernel.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH] tracing/user_events: Add eBPF interface for user_event
 created events

On Wed, Mar 30, 2022 at 09:06:24AM -0700, Song Liu wrote:
> On Tue, Mar 29, 2022 at 4:11 PM Beau Belgrave <beaub@...ux.microsoft.com> wrote:
> >
> > On Tue, Mar 29, 2022 at 03:31:31PM -0700, Alexei Starovoitov wrote:
> > > On Tue, Mar 29, 2022 at 1:11 PM Beau Belgrave <beaub@...ux.microsoft.com> wrote:
> > > >
> > > > On Tue, Mar 29, 2022 at 12:50:40PM -0700, Alexei Starovoitov wrote:
> > > > > On Tue, Mar 29, 2022 at 11:19 AM Beau Belgrave
> > > > > <beaub@...ux.microsoft.com> wrote:
> > > > > >
> > > > > > Send user_event data to attached eBPF programs for user_event based perf
> > > > > > events.
> > > > > >
> > > > > > Add BPF_ITER flag to allow user_event data to have a zero copy path into
> > > > > > eBPF programs if required.
> > > > > >
> > > > > > Update documentation to describe new flags and structures for eBPF
> > > > > > integration.
> > > > > >
> > > > > > Signed-off-by: Beau Belgrave <beaub@...ux.microsoft.com>
> > > > >
> > > > > The commit describes _what_ it does, but says nothing about _why_.
> > > > > At present I see no use for the bpf and user_events connection.
> > > > > The whole user_events feature looks redundant to me.
> > > > > We have uprobes and usdt. It doesn't look to me like
> > > > > user_events provides anything new that wasn't available earlier.
> > > >
> > > > A lot of the why, in general, for user_events is covered in the first
> > > > change in the series.
> > > > Link: https://lore.kernel.org/all/20220118204326.2169-1-beaub@linux.microsoft.com/
> > > >
> > > > The why was also covered in Linux Plumbers Conference 2021 within the
> > > > tracing microconference.
> > > >
> > > > An example of why we want user_events:
> > > > Managed code running that emits data out via Open Telemetry.
> > > > Since it's managed, there isn't a stable stub location to patch; the
> > > > code moves.
> > > > We watch the Open Telemetry spans in an eBPF program; when a span takes
> > > > too long we collect stack data and perform other actions.
> > > > With user_events and perf we can monitor the entire system from the root
> > > > container without having to run relay agents within each
> > > > cgroup/namespace taking up resources.
> > > > We do not need to enter each cgroup mnt space and determine the correct
> > > > patch location or the right version of each binary for processes that
> > > > use user_events.
> > > >
> > > > An example of why we want eBPF integration:
> > > > We also have scenarios where we are live decoding the data quickly.
> > > > Having user_event data fed directly to eBPF lets us cast the incoming
> > > > data to a struct and decode it very quickly to determine if something is
> > > > wrong.
> > > > We can take that data quickly and put it into maps to perform further
> > > > aggregation as required.
> > > > We have scenarios that have "skid" problems, where we need to grab
> > > > further data exactly when the process that had the problem was running.
> > > > eBPF lets us do all of this that we cannot easily do otherwise.
> > > >
> > > > Another benefit of user_events is that the tracing is much faster than
> > > > uprobes or other approaches that use int 3 traps. This is critical for
> > > > us to enable on production systems.
> > >
> > > None of it makes sense to me.
> >
> > Sorry.
> >
> > > To take advantage of user_events user space has to be modified
> > > and writev syscalls inserted.
> >
> > Yes, both user_events and lttng require user space modifications to do
> > tracing correctly. The syscall overheads are real, and the cost depends
> > on the mitigations around spectre/meltdown.
> >
> > > This is not cheap and I cannot see a production system using this interface.
> >
> > But you are fine with uprobe costs? uprobes appear to be much more costly
> > than a syscall approach on the hardware I've run on.
> 
> Can we achieve the same/similar performance with sys_bpf(BPF_PROG_RUN)?
> 
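
For reference, BPF_PROG_RUN here means invoking an already-loaded program
directly through the bpf() syscall; libbpf wraps it as
bpf_prog_test_run_opts(). A minimal sketch, assuming a program type that
accepts ctx_in (e.g. BPF_PROG_TYPE_SYSCALL) and a hypothetical payload
struct:

#include <bpf/bpf.h>          /* bpf_prog_test_run_opts(), LIBBPF_OPTS */
#include <stdint.h>

struct my_payload {            /* placeholder event payload */
	uint32_t count;
};

/* Run an already-loaded BPF program once with a user-supplied context.
 * Under the hood this issues sys_bpf(BPF_PROG_RUN, ...).
 */
static int run_prog_with_payload(int prog_fd, struct my_payload *p)
{
	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.ctx_in = p,
		.ctx_size_in = sizeof(*p),
	);

	return bpf_prog_test_run_opts(prog_fd, &opts);
}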

I think so; the tough part is how you let user space know which
program is attached and should run. In the current code this is done by the
BPF program attaching to the event via perf, and we run the attached
program, if any, when data is emitted out via write calls.
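
As a rough sketch of that attach path (the event name, tracefs path, and
single-CPU perf_event_open are illustrative; this mirrors how BPF programs
attach to ordinary tracepoints):

#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Attach an already-loaded BPF program (bpf_prog_fd) to a registered
 * user_event by opening the tracepoint through perf and setting the
 * program on the resulting perf fd.  Only one CPU is opened here for
 * brevity; real tooling opens one event per CPU or targets a pid.
 */
static int attach_bpf_to_user_event(int bpf_prog_fd)
{
	struct perf_event_attr attr;
	FILE *f;
	int id, perf_fd;

	/* user_events show up in tracefs like any other trace event. */
	f = fopen("/sys/kernel/tracing/events/user_events/test_event/id", "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &id) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_TRACEPOINT;
	attr.size = sizeof(attr);
	attr.config = id;
	attr.sample_period = 1;

	perf_fd = syscall(__NR_perf_event_open, &attr, -1 /* all pids */,
			  0 /* cpu */, -1 /* group */, PERF_FLAG_FD_CLOEXEC);
	if (perf_fd < 0)
		return -1;

	/* Hook the BPF program onto the perf event and enable it. */
	if (ioctl(perf_fd, PERF_EVENT_IOC_SET_BPF, bpf_prog_fd) < 0 ||
	    ioctl(perf_fd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
		close(perf_fd);
		return -1;
	}

	return perf_fd;
}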

I would want to make sure that operators can decide where the user-space
data goes (perf/ftrace/eBPF) after the code has been written. With the
current code this is done via the tracepoint callbacks that perf/ftrace
hook up when operators enable recording via perf, tracefs, libbpf, etc.
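
For example, the operator-side toggle can be as simple as writing the
tracefs enable file, the same as for any other trace event (the path and
event name below are assumptions):

#include <stdio.h>

/* Toggle recording of a user_event from the operator side by writing to
 * its tracefs enable file, just like any other trace event.
 */
static int set_user_event_enabled(const char *event, int on)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/tracing/events/user_events/%s/enable", event);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", on);
	fclose(f);
	return 0;
}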

We have managed code (C#/Java) where we cannot easily utilize stubs or traps
due to code movement, so we are limited in how we can approach this
problem. Having the interface be mmap/write has made this workable for us,
since it's easy to interact with from most languages and gives us
lifetime management of the trace objects between user space and the
kernel.
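
For illustration, a minimal sketch of that register-then-write flow is
below. The struct user_reg fields and paths follow the series' uapi as I
understand it and have shifted between revisions, so treat the details as
assumptions rather than the final ABI:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <unistd.h>
#include <linux/user_events.h>   /* DIAG_IOCSREG, struct user_reg */

int main(void)
{
	struct user_reg reg = { 0 };
	struct iovec io[2];
	uint32_t payload = 42;
	int fd, write_index;

	fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);
	if (fd < 0)
		return 1;

	/* Register (or look up) the event; the string names the event and
	 * describes its fields.
	 */
	reg.size = sizeof(reg);
	reg.name_args = (uint64_t)(uintptr_t)"test_event u32 count";
	if (ioctl(fd, DIAG_IOCSREG, &reg) < 0)
		return 1;
	write_index = reg.write_index;

	/* The series also exposes an mmap'd status area so user space can
	 * cheaply check whether anyone (perf/ftrace/eBPF) has enabled the
	 * event before paying for the write; omitted here.
	 */

	/* The first iovec selects which registered event the write targets;
	 * the remaining iovecs are the payload handed to whatever is
	 * attached (ftrace, perf, or an eBPF program).
	 */
	io[0].iov_base = &write_index;
	io[0].iov_len = sizeof(write_index);
	io[1].iov_base = &payload;
	io[1].iov_len = sizeof(payload);
	if (writev(fd, io, 2) < 0)
		return 1;

	close(fd);
	return 0;
}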

> Thanks,
> Song
> 
> >
> > > All you did is a poor man's version of lttng that doesn't rely
> > > on such heavy instrumentation.
> >
> > Well I am a frugal person. :)
> >
> > This work has solved some critical issues we've been having, and I would
> > appreciate a review of the code if possible.
> >
> > Thanks,
> > -Beau

Thanks,
-Beau
