Message-ID: <20190523054610.GR2422@oracle.com>
Date: Thu, 23 May 2019 01:46:10 -0400
From: Kris Van Hees <kris.van.hees@...cle.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Kris Van Hees <kris.van.hees@...cle.com>,
Steven Rostedt <rostedt@...dmis.org>, netdev@...r.kernel.org,
bpf@...r.kernel.org, dtrace-devel@....oracle.com,
linux-kernel@...r.kernel.org, mhiramat@...nel.org, acme@...nel.org,
ast@...nel.org, daniel@...earbox.net, peterz@...radead.org
Subject: Re: [RFC PATCH 00/11] bpf, trace, dtrace: DTrace BPF program type
implementation and sample use
On Wed, May 22, 2019 at 01:53:31PM -0700, Alexei Starovoitov wrote:
> On Wed, May 22, 2019 at 01:23:27AM -0400, Kris Van Hees wrote:
> >
> > Userspace aside, there are various features that are not currently available
> > such as retrieving the ppid of the current task, and various other data items
> > that relate to the current task that triggered a probe. There are ways to
> > work around it (using the bpf_probe_read() helper, which actually performs a
> > probe_kernel_read()) but that is rather clunky
>
> Sounds like you're admitting that the access to all kernel data structures
> is actually available, but you don't want to change user space to use it?
I agree, of course, that all kernel data structures can be accessed using the
bpf_probe_read() helper. But I hope you agree that the availability of that
helper doesn't mean there is no room for more elegant ways to access the same
information. There are already helpers (e.g. bpf_get_current_pid_tgid) that
could be replaced by BPF code that uses bpf_probe_read() to accomplish the
same thing.
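To make the "clunky" point concrete, here is a minimal user-space sketch (not
actual BPF kernel code): probe_read() stands in for bpf_probe_read(), and the
toy struct task with a real_parent field is an illustrative assumption about
layout, not the real kernel definition. Reaching the ppid takes two raw reads
and an intermediate pointer, whereas a dedicated helper would be a single call:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for the kernel's task struct; layout is illustrative only. */
struct task {
	int tgid;
	struct task *real_parent;
};

/* Mimics bpf_probe_read(): copy 'size' bytes from 'src' into 'dst'.
 * (The real helper also validates that the source address is readable.) */
static int probe_read(void *dst, size_t size, const void *src)
{
	memcpy(dst, src, size);
	return 0;
}

/* The clunky route: two raw reads to reach current->real_parent->tgid. */
static int get_ppid_via_probe_read(const struct task *current)
{
	struct task *parent;
	int ppid;

	probe_read(&parent, sizeof(parent), &current->real_parent);
	probe_read(&ppid, sizeof(ppid), &parent->tgid);
	return ppid;
}

/* What a dedicated (hypothetical) helper could look like: one call,
 * no manual pointer chasing in the BPF program itself. */
static int get_current_ppid(const struct task *current)
{
	return current->real_parent->tgid;
}
```

Both routes produce the same answer; the difference is how much kernel layout
knowledge the program (and the tool that generates it) has to carry around.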
> > triggered the execution. Often, a single DTrace clause is associated with
> > multiple probes, of different types. Probes in the kernel (kprobe, perf event,
> > tracepoint, ...) are associated with their own BPF program type, so it is not
> > possible to load the DTrace clause (translated into BPF code) once and
> > associate it with probes of different types. Instead, I'd have to load it
> > as a BPF_PROG_TYPE_KPROBE program to associate it with a kprobe, and I'd have
> > to load it as a BPF_PROG_TYPE_TRACEPOINT program to associate it with a
> > tracepoint, and so on. This also means that I suddenly have to add code to
> > the userspace component to know about the different program types with more
> > detail, like what helpers are available to specific program types.
>
> That also sounds like there is a solution, but you don't want to change user space?
I think there is a difference between a solution and a good solution. Adding
a lot of knowledge about kernel-level implementation details to the userspace
component makes for a more fragile infrastructure, and it breaks down
well-established boundaries in DTrace that are part of the design specifically
to ensure that userspace doesn't need to depend on such intimate knowledge.
> > Another advantage of being able to operate on a more abstract probe concept
> > that is not tied to a specific probe type is that the userspace component does
> > not need to know about the implementation details of the specific probes.
>
> If that is indeed the case then dtrace is broken _by design_
> and nothing on the kernel side can fix it.
>
> bpf prog attached to NMI is running in NMI.
> That is very different execution context vs kprobe.
> kprobe execution context is also different from syscall.
>
> The user writing the script has to be aware in what context
> that script will be executing.
The design behind DTrace definitely recognizes that different types of probes
operate in different ways and have different data associated with them. That
is why probes (in legacy DTrace) are managed by providers, one for each type
of probe. The providers handle the specifics of a probe type, and provide a
generic probe API to the processing component of DTrace:
    SDT probes -----> SDT provider -------+
                                          |
    FBT probes -----> FBT provider -------+--> DTrace engine
                                          |
    syscall probes -> systrace provider --+
This means that the DTrace processing component can be implemented based on a
generic probe concept, and the providers will take care of the specifics. In
that sense, it is similar to so many other parts of the kernel where a generic
API is exposed so that higher level components don't need to know implementation
details.
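A minimal sketch of that split, again in plain user-space C rather than actual
DTrace code (the names struct probe_event, struct provider, and
fbt_provider_fire are all hypothetical): each provider translates its native
firing mechanism into one generic event type and hands it to a single engine
entry point, so the engine is written once against the generic view:

```c
#include <assert.h>
#include <string.h>

/* Generic view of a firing probe, as the engine sees it. */
struct probe_event {
	const char *provider;
	int probe_id;
};

typedef void (*engine_fn)(const struct probe_event *ev);

/* A provider binds its name to the shared engine at registration time. */
struct provider {
	const char *name;
	engine_fn engine;
};

static const char *last_provider;
static int last_probe;

/* The engine only ever sees the generic event, never the probe mechanism. */
static void dtrace_engine(const struct probe_event *ev)
{
	last_provider = ev->provider;
	last_probe = ev->probe_id;
}

/* A provider-specific entry point: this is where knowledge of how an FBT
 * probe actually fires would live; the engine never needs to know. */
static void fbt_provider_fire(struct provider *p, int probe_id)
{
	struct probe_event ev = { .provider = p->name, .probe_id = probe_id };

	p->engine(&ev);
}
```

Adding a new probe type then means adding a new provider-side translation,
with no change to the engine at all.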
In DTrace, people write scripts based on UAPI-style interfaces and they don't
have to concern themselves with e.g. knowing how to get the value of the 3rd
argument that was passed by the firing probe. All they need to know is that
the probe will have a 3rd argument, and that the 3rd argument to *any* probe
can be accessed as 'arg2' (or args[2] for typed arguments, if the provider is
capable of providing that). Different probes have different ways of passing
arguments, and only the provider code for each probe type needs to know how
to retrieve the argument values.
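The argument-passing point can be sketched the same way. In this hypothetical
user-space example, a kprobe-style source exposes CPU registers while a
tracepoint-style source exposes a flat record (the struct names and the
x86-64-ish register choice are illustrative assumptions); the provider-side
arg() translation is the only place that knows the difference, and a script
would just ask for arg(ctx, 2):

```c
#include <assert.h>
#include <stdint.h>

/* Two probe types pass arguments differently. */
struct regs {
	uint64_t di, si, dx;	/* first three x86-64 argument registers */
};

struct record {
	uint64_t fields[8];	/* flat tracepoint-style record */
};

enum src_kind { SRC_REGS, SRC_RECORD };

struct probe_ctx {
	enum src_kind kind;
	union {
		struct regs regs;
		struct record rec;
	} u;
};

/* Provider-side translation: scripts only ever see arg(ctx, n). */
static uint64_t arg(const struct probe_ctx *ctx, int n)
{
	if (ctx->kind == SRC_REGS) {
		const uint64_t *r[] = {
			&ctx->u.regs.di, &ctx->u.regs.si, &ctx->u.regs.dx,
		};
		return *r[n];
	}
	return ctx->u.rec.fields[n];
}
```

Whether the value arrived in a register or a record, the script-visible name
for it is the same.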
Does this help bring clarity to the reasons why an abstract (generic) probe
concept is part of DTrace's design?