Message-ID: <20190524052616.GW2422@oracle.com>
Date:   Fri, 24 May 2019 01:26:16 -0400
From:   Kris Van Hees <kris.van.hees@...cle.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Kris Van Hees <kris.van.hees@...cle.com>,
        netdev@...r.kernel.org, bpf@...r.kernel.org,
        dtrace-devel@....oracle.com, linux-kernel@...r.kernel.org,
        mhiramat@...nel.org, acme@...nel.org, ast@...nel.org,
        daniel@...earbox.net, peterz@...radead.org
Subject: Re: [RFC PATCH 00/11] bpf, trace, dtrace: DTrace BPF program type
 implementation and sample use

On Thu, May 23, 2019 at 07:08:51PM -0700, Alexei Starovoitov wrote:
> On Thu, May 23, 2019 at 09:57:37PM -0400, Steven Rostedt wrote:
> > On Thu, 23 May 2019 17:31:50 -0700
> > Alexei Starovoitov <alexei.starovoitov@...il.com> wrote:
> > 
> > 
> > > > Now from what I'm reading, it seems that the DTrace layer may be
> > > > abstracting out fields from the kernel. This is actually something I
> > > > have been thinking about to solve the "tracepoint abi" issue. There
> > > > are usually basic ideas that happen: an interrupt goes off, there's a
> > > > handler, etc. We could abstract that out such that we trace when an
> > > > interrupt goes off and when the handler runs, and record the vector
> > > > number and/or which device it was for. We have tracepoints in the
> > > > kernel that do this, but they do depend a bit on the implementation.
> > > > Now, if we could get a layer that abstracts this information away from
> > > > the implementation, then I think that's a *good* thing.  
> > > 
> > > I don't like this deferred irq idea at all.
> > 
> > What do you mean deferred?
> 
> that's how I interpreted your proposal: 
> "interrupt goes off and the handler happens, and record the vector number"
> It's not a good thing to report the irq later.
> That's just like saying let's record a perf counter event and report it later.

The abstraction I mentioned does not defer anything - it merely provides a way
for all probe events to be processed as a generic probe with a set of values
associated with it (e.g. syscall arguments for a syscall entry probe).  The
program that implements what needs to happen when that probe fires still does
whatever is necessary to collect information, and dump data in the output
buffers before execution continues.

I could trace entry into a syscall by using a syscall entry tracepoint or by
putting a kprobe on the syscall function itself.  I am usually interested in
whether the syscall was called, what the arguments were, and perhaps I need to
collect some other data related to it.  More often than not, both probes would
get the job done.  With an abstraction that hides the implementation details
of the probe mechanism itself, both cases are essentially the same.

> > > Abstracting details from the users is _never_ a good idea.
> > 
> > Really? Almost everything we do abstracts details from the user. The
> > key is to make the abstraction more meaningful than the raw data.
> > 
> > > A ton of people use bcc scripts and bpftrace because they want those details.
> > > They need to know what kernel is doing to make better decisions.
> > > Delaying irq record is the opposite.
> > 
> > I never said anything about delaying the record. Just getting the
> > information that is needed.
> > 
> > > > 
> > > > I wish that was totally true, but tracepoints *can* be an abi. I had
> > > > code reverted because powertop required one to be a specific
> > > > format. To this day, the wakeup event has a "success" field that
> > > > writes in a hardcoded "1", because there's tools that depend on it,
> > > > and they only work if there's a success field and the value is 1.  
> > > 
> > > I really think that you should put powertop nightmares to rest.
> > > That was long ago. The kernel is different now.
> > 
> > Is it?
> > 
> > > Linus made it clear several times that it is ok to change _all_
> > > tracepoints. Period. Some maintainers somehow still don't believe
> > > that they can do it.
> > 
> > From what I remember, he has said several times that you can change
> > all tracepoints, but if a change breaks a tool that is useful, then
> > that change will get reverted. He will allow you to go fix the tool
> > and then bring back the change (which was the solution for powertop).
> 
> my interpretation is different.
> We changed tracepoints. It broke scripts. People changed scripts.

In my world, the sequence is more like: tracepoints get changed, scripts
break, I fix the provider (abstraction), scripts work again.  Users really
appreciate that aspect because many of our users are not kernel experts.

> > > Some tracepoints are used more than others and more people will
> > > complain: "ohh I need to change my script" when that tracepoint
> > > changes. But the kernel development is not going to be hampered by a
> > > tracepoint. No matter how widespread its usage in scripts.
> > 
> > That's because we'll treat bpf (and DTrace) scripts like modules (no
> > abi), at least we had better. But if there's a tool that doesn't use the
> > script and reads the tracepoint directly via perf, then that's a
> > different story.
> 
> absolutely not.
> tracepoint is a tracepoint. It can change regardless of what
> is using it and how.
