Message-ID: <20191022175052.frjzlnjjfwwfov64@ast-mbp.dhcp.thefacebook.com>
Date:   Tue, 22 Oct 2019 10:50:56 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Song Liu <songliubraving@...com>,
        Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [PATCH 3/3] x86/ftrace: Use text_poke()

On Tue, Oct 22, 2019 at 09:44:55AM -0400, Steven Rostedt wrote:
> On Tue, 22 Oct 2019 07:19:56 -0400
> Steven Rostedt <rostedt@...dmis.org> wrote:
> 
> > > I'm not touching dyn_ftrace.
> > > Actually calling my stuff ftrace+bpf is probably not correct either.
> > > I'm reusing code patching of nop into call that ftrace does. That's it.
> > > Turned out I cannot use 99% of ftrace facilities.
> > > ftrace_caller, ftrace_call, ftrace_ops_list_func and the whole ftrace api
> > > with ip, parent_ip and pt_regs cannot be used for this part of the work.
> > > bpf prog needs to access raw function arguments. To achieve that I'm  
> > 
> > You can do that today with the ftrace facility, just like live patching
> > does. You register a ftrace_ops with the flag FTRACE_OPS_FL_IPMODIFY,
> > and your func will set the regs->ip to your bpf handler. When the
> > ftrace_ops->func returns, instead of going back to the called
> > function, it can jump to your bpf_handler. You can create a shadow stack
> > (like function graph tracer does) to save the return address for where
> > your bpf handler needs to return to. As your bpf_handler needs raw
> > access to the parameters, it may not even need the shadow stack because
> > it should know the function it is reading the parameters from.
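A minimal sketch of the registration being described, against the v5.4-era ftrace API. The names are illustrative (not taken from the sample patch further down), my_replacement() is a hypothetical handler like the sample's my_wake_up(), and hook_wake_up_process() would be called from a module's init:

#include <linux/ftrace.h>
#include <linux/sched.h>

/* a replacement like my_wake_up() in the sample below, defined elsewhere */
int my_replacement(struct task_struct *p);

static void redirect_callback(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *ops, struct pt_regs *regs)
{
	/* crude recursion check: when the replacement calls the real
	 * wake_up_process(), parent_ip points inside the replacement and
	 * we must not redirect again */
	if (parent_ip >= (unsigned long)my_replacement &&
	    parent_ip <  (unsigned long)my_replacement + 0x1000)
		return;

	regs->ip = (unsigned long)my_replacement;	/* resume there instead */
}

static struct ftrace_ops redirect_ops = {
	.func  = redirect_callback,
	/* IPMODIFY requires SAVE_REGS so the callback may rewrite regs->ip */
	.flags = FTRACE_OPS_FL_IPMODIFY | FTRACE_OPS_FL_SAVE_REGS,
};

static int hook_wake_up_process(void)
{
	int ret;

	/* trace (and thus redirect) only wake_up_process() */
	ret = ftrace_set_filter_ip(&redirect_ops,
				   (unsigned long)wake_up_process, 0, 0);
	if (ret)
		return ret;
	return register_ftrace_function(&redirect_ops);
}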
> 
> To show just how easy this is, I wrote up a quick hack that hijacks the
> wake_up_process() function and adds a trace_printk() to see what was
> woken up. My output from the trace is this:
> 
>           <idle>-0     [007] ..s1    68.517276: my_wake_up: We are waking up rcu_preempt:10
>            <...>-1240  [001] ....    68.517727: my_wake_up: We are waking up kthreadd:2
>            <...>-1240  [001] d..1    68.517973: my_wake_up: We are waking up kworker/1:0:17
>             bash-1188  [003] d..2    68.519020: my_wake_up: We are waking up kworker/u16:3:140
>             bash-1188  [003] d..2    68.519138: my_wake_up: We are waking up kworker/u16:3:140
>             sshd-1187  [005] d.s2    68.519295: my_wake_up: We are waking up kworker/5:2:517
>           <idle>-0     [007] ..s1    68.522293: my_wake_up: We are waking up rcu_preempt:10
>           <idle>-0     [007] ..s1    68.526309: my_wake_up: We are waking up rcu_preempt:10
> 
> I added the code to the trace-event-sample.c sample module, and got the
> above when I loaded that module (modprobe trace-event-sample).
> 
> It's mostly non-arch-specific (that is, you can do this with any
> arch that supports the IPMODIFY flag). The only part that would need
> arch-specific code is the regs->ip compare. The pip check can also be
> done less "hacky". But this shows how easily this can be done today.
> Not sure what is missing that you need.

yes. I've studied all these possibilities and tried a bunch of ways to extend
and reuse ftrace_ops, but every time I found something fundamental in
the existing ftrace design that doesn't work for bpf usage.
See below:

> Here's the patch:
> 
> diff --git a/samples/trace_events/trace-events-sample.c b/samples/trace_events/trace-events-sample.c
> index 1a72b7d95cdc..526a6098c811 100644
> --- a/samples/trace_events/trace-events-sample.c
> +++ b/samples/trace_events/trace-events-sample.c
> @@ -11,6 +11,41 @@
>  #define CREATE_TRACE_POINTS
>  #include "trace-events-sample.h"
>  
> +#include <linux/ftrace.h>
> +
> +int wake_up_process(struct task_struct *p);
> +
> +int x;
> +
> +static int my_wake_up(struct task_struct *p)
> +{
> +	int ret;
> +
> +	trace_printk("We are waking up %s:%d\n", p->comm, p->pid);
> +	ret = wake_up_process(p);
> +	/* Force not having a tail call */
> +	if (!x)
> +		return ret;
> +	return 0;
> +}
> +
> +static void my_hijack_func(unsigned long ip, unsigned long pip,
> +			   struct ftrace_ops *ops, struct pt_regs *regs)

1.
To pass regs into the callback, ftrace_regs_caller() has a huge number
of stores to do. Saving selector registers is not fast. pushf/popf are even slower.
~200 bytes of stack are used for the save/restore.
This is pure overhead that bpf execution cannot afford.
bpf is different from traditional ftrace and other tracing, since
it's often active 24/7. Every nanosecond counts.
So for bpf I'm generating an assembler trampoline tailored to the specific kernel
function, and that function's fentry nop gets replaced with a call to the trampoline.
Instead of a 20+ register save, I save only the arguments of that kernel function.
For example, the trampoline that attaches to kfree_skb() will save only two registers
(rbp and rdi on x86) and will use 16 bytes of stack.
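Hand-writing what such a generated trampoline could look like for kfree_skb() is
purely a sketch: the real one is emitted as machine code at attach time,
attached_bpf_prog is a made-up name, and GNU toplevel asm is used only so the
sketch can live in a C file:

void attached_bpf_prog(unsigned long *args);	/* hypothetical attached prog */

asm (
"kfree_skb_tramp:\n"
"	push %rbp\n"
"	mov  %rsp, %rbp\n"
"	sub  $16, %rsp\n"
"	mov  %rdi, -8(%rbp)	# spill arg1 (the skb): the only register saved besides rbp\n"
"	lea  -8(%rbp), %rdi	# the prog gets a pointer to the saved raw args\n"
"	call attached_bpf_prog\n"
"	mov  -8(%rbp), %rdi	# restore arg1 for kfree_skb's own body\n"
"	leave\n"
"	ret			# return address points back into kfree_skb, after the patched nop\n"
);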

2.
The common ftrace callback api allows the ftrace infra to use the generic
ftrace_ops_list_func() that works for all ftracers, but it doesn't scale.
We see different teams writing bpf services that attach to the same function.
In addition there are 30+ kprobes active in other places, so for every
fentry the ftrace_ops_list_func() walks a long linked list and does a hash
lookup for each one. That's not efficient, and we see this slowdown in practice.
Because there is a unique trampoline for every kernel function, a single
generic list caller is not possible.
Instead the generated trampoline handles all bpf programs attached
to this particular kernel function with a sequence of calls.
No linked lists to walk, no hash tables to look up.
All overhead is gone.
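A rough C sketch of that difference (every name below is made up; the real shared
caller is ftrace_ops_list_func() and the real per-function dispatch is generated code):

struct tracer_ops {
	struct tracer_ops *next;
	int (*wants_ip)(unsigned long ip);	/* stands in for the filter-hash lookup */
	void (*func)(unsigned long ip);
};

/* what every fentry hit pays today when it goes through the shared caller */
static void generic_list_caller(struct tracer_ops *head, unsigned long ip)
{
	struct tracer_ops *op;

	for (op = head; op; op = op->next)	/* O(number of tracers) per hit */
		if (op->wants_ip(ip))		/* hash lookup per ops */
			op->func(ip);
}

void prog_of_team_a(unsigned long *args);	/* hypothetical attached bpf progs */
void prog_of_team_b(unsigned long *args);

/* what the trampoline generated for one kernel function does instead */
static void kfree_skb_trampoline_body(unsigned long *args)
{
	prog_of_team_a(args);	/* fixed sequence of direct calls, */
	prog_of_team_b(args);	/* no list walk, no hash lookup */
}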

3.
The existing kprobe bpf progs use pt_regs to read arguments. Due to
that ugliness all of them are written for a single architecture (x86-64).
Porting them to arm64 is not that hard, but porting to a 32-bit arch is close
to impossible. With a custom generated trampoline we'll have bpf progs that
work as-is on all archs. raw_tracepoint bpf progs have already demonstrated
that such portability is possible. These new kprobe++ progs will be similar.
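Seen from the prog side, the difference is roughly this. The kprobe form is what
exists today; the second form is only a sketch, so the "fentry" section name and
the ctx layout are assumptions, not a posted API:

#include <linux/types.h>
#include <linux/ptrace.h>
#include "bpf_helpers.h"	/* SEC() and, in the selftests copy, PT_REGS_PARM1() */

SEC("kprobe/kfree_skb")
int kprobe__kfree_skb(struct pt_regs *ctx)
{
	/* PT_REGS_PARM1() expands to ctx->di on x86-64 and has to be
	 * redefined for every arch someone cares about */
	void *skb = (void *)PT_REGS_PARM1(ctx);

	return skb == NULL;	/* placeholder use of the argument */
}

/* raw-args sketch: the trampoline hands the prog its saved arguments,
 * so nothing here depends on the calling convention */
SEC("fentry/kfree_skb")		/* illustrative section name */
int fentry__kfree_skb(__u64 *args)
{
	void *skb = (void *)args[0];

	return skb == NULL;	/* placeholder use of the argument */
}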

4.
Due to the uniqueness of the bpf trampoline, sharing a trampoline between ftracers
and bpf progs is not possible, so users would have to choose whether to
ftrace that particular kernel function or attach bpf to it.
The two attach paths are not going to stomp on each other. I'm reusing the
ftrace_make_call/nop approach that checks that it's a 'nop' being replaced.
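The check boils down to comparing the expected old bytes before writing, roughly
like this sketch (illustrative helper name, loosely modeled on what the
ftrace_make_call()/ftrace_make_nop() path already does on x86):

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/uaccess.h>	/* probe_kernel_read() */
#include <asm/ftrace.h>		/* MCOUNT_INSN_SIZE */

static int nop_to_call(void *ip, const u8 *expect_nop, const u8 *call_insn)
{
	u8 cur[MCOUNT_INSN_SIZE];

	if (probe_kernel_read(cur, ip, MCOUNT_INSN_SIZE))
		return -EFAULT;
	/* only a nop may be replaced; if ftrace (or another bpf user)
	 * already turned it into a call, refuse instead of stomping */
	if (memcmp(cur, expect_nop, MCOUNT_INSN_SIZE))
		return -EBUSY;
	/* ...then text-poke call_insn into place... */
	return 0;
}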

There will probably be some gotchas and unforeseen issues, since the prototype
is very rough and not in a reviewable form yet. Will share when it's ready.
