Date:   Wed, 27 Dec 2017 14:46:24 -0800
From:   Alexei Starovoitov <ast@...com>
To:     Masami Hiramatsu <mhiramat@...nel.org>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
CC:     Josef Bacik <jbacik@...com>, <rostedt@...dmis.org>,
        <mingo@...hat.com>, <davem@...emloft.net>,
        <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <ast@...nel.org>, <kernel-team@...com>, <daniel@...earbox.net>,
        <linux-btrfs@...r.kernel.org>, <darrick.wong@...cle.com>,
        Josef Bacik <josef@...icpanda.com>,
        Akinobu Mita <akinobu.mita@...il.com>
Subject: Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error
 injectable event is on function entry

On 12/26/17 9:56 PM, Masami Hiramatsu wrote:
> On Tue, 26 Dec 2017 17:57:32 -0800
> Alexei Starovoitov <alexei.starovoitov@...il.com> wrote:
>
>> On Tue, Dec 26, 2017 at 04:46:59PM +0900, Masami Hiramatsu wrote:
>>> Check whether an error-injectable event is on a function entry or not.
>>> Currently it checks whether the event is an ftrace-based kprobe, but
>>> that is wrong. It should check whether the event is on the entry of
>>> the target function. Since error injection overrides a function to
>>> just return with a modified return value, that operation must be done
>>> before the target function starts building its stack frame.
>>>
>>> As a side effect, bpf error injection no longer needs to depend on the
>>> function tracer. It can also work with software-breakpoint-based
>>> kprobe events.
>>>
>>> Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
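
To make the ordering constraint above concrete, here is a minimal sketch of
the override mechanism, assuming x86 and using illustrative (not mainline)
symbol names: the return value is written into pt_regs and the saved
instruction pointer is redirected to a stub that immediately returns, which
only unwinds cleanly while no stack frame exists, i.e. at the very first
instruction of the target function.

#include <linux/ptrace.h>       /* struct pt_regs (x86 layout assumed) */

/* Stub that does nothing but return to the probed function's caller. */
void just_return_stub(void);
asm(
        ".type just_return_stub, @function\n"
        "just_return_stub:\n"
        "       ret\n"
        ".size just_return_stub, .-just_return_stub\n"
);

/*
 * Sketch: make the probed function "return rc" without ever running it.
 * This is only safe while regs->ip still points at the function's first
 * instruction; once a stack frame has been pushed, a bare "ret" would
 * unwind to the wrong place.
 */
static void override_with_return(struct pt_regs *regs, unsigned long rc)
{
        regs->ax = rc;                                  /* x86 return value register */
        regs->ip = (unsigned long)&just_return_stub;    /* resume at the stub */
}
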
>>> ---
>>>  kernel/trace/Kconfig        |    2 --
>>>  kernel/trace/bpf_trace.c    |    6 +++---
>>>  kernel/trace/trace_kprobe.c |    8 +++++---
>>>  kernel/trace/trace_probe.h  |   12 ++++++------
>>>  4 files changed, 14 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
>>> index ae3a2d519e50..6400e1bf97c5 100644
>>> --- a/kernel/trace/Kconfig
>>> +++ b/kernel/trace/Kconfig
>>> @@ -533,9 +533,7 @@ config FUNCTION_PROFILER
>>>  config BPF_KPROBE_OVERRIDE
>>>  	bool "Enable BPF programs to override a kprobed function"
>>>  	depends on BPF_EVENTS
>>> -	depends on KPROBES_ON_FTRACE
>>>  	depends on HAVE_KPROBE_OVERRIDE
>>> -	depends on DYNAMIC_FTRACE_WITH_REGS
>>>  	default n
>>>  	help
>>>  	 Allows BPF to override the execution of a probed function and
>>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>>> index f6d2327ecb59..d663660f8392 100644
>>> --- a/kernel/trace/bpf_trace.c
>>> +++ b/kernel/trace/bpf_trace.c
>>> @@ -800,11 +800,11 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
>>>  	int ret = -EEXIST;
>>>
>>>  	/*
>>> -	 * Kprobe override only works for ftrace based kprobes, and only if they
>>> -	 * are on the opt-in list.
>>> +	 * Kprobe override only works if they are on the function entry,
>>> +	 * and only if they are on the opt-in list.
>>>  	 */
>>>  	if (prog->kprobe_override &&
>>> -	    (!trace_kprobe_ftrace(event->tp_event) ||
>>> +	    (!trace_kprobe_on_func_entry(event->tp_event) ||
>>>  	     !trace_kprobe_error_injectable(event->tp_event)))
>>>  		return -EINVAL;
>>>
>>> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
>>> index 91f4b57dab82..265e3e27e8dc 100644
>>> --- a/kernel/trace/trace_kprobe.c
>>> +++ b/kernel/trace/trace_kprobe.c
>>> @@ -88,13 +88,15 @@ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
>>>  	return nhit;
>>>  }
>>>
>>> -int trace_kprobe_ftrace(struct trace_event_call *call)
>>> +bool trace_kprobe_on_func_entry(struct trace_event_call *call)
>>>  {
>>>  	struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
>>> -	return kprobe_ftrace(&tk->rp.kp);
>>> +
>>> +	return kprobe_on_func_entry(tk->rp.kp.addr, tk->rp.kp.symbol_name,
>>> +				    tk->rp.kp.offset);
>>
>> That would be nice, but did you test this?
>
> Yes, because jprobes, which were the only official users of modifying the
> execution path via a kprobe, performed the same check. (kretprobes do it
> too.)
>
>> My understanding is that a kprobe will restore all regs, and
>> here we need to override the return ip _and_ value.
>
> Yes, no problem. A kprobe restores all regs from pt_regs, including regs->ip.
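
For illustration, a minimal sketch of the contract being relied on here,
assuming an x86 kprobe pre-handler of the kind installed via a struct
kprobe's .pre_handler (the actual error-injection code is more involved):
the handler may rewrite pt_regs, and returning a non-zero value tells the
kprobes core that regs->ip was changed and the original instruction must
not be single-stepped, the same convention jprobes used.

#include <linux/kprobes.h>
#include <linux/errno.h>

extern void just_return_stub(void);     /* illustrative stub sketched earlier */

/*
 * Sketch only: divert the probed function at its entry.  The rewritten
 * pt_regs (including regs->ip) are what get restored when the kprobe
 * exception returns, so execution resumes at the stub instead of the
 * probed function.
 */
static int entry_override_handler(struct kprobe *p, struct pt_regs *regs)
{
        regs->ax = -EINVAL;                             /* fake return value */
        regs->ip = (unsigned long)&just_return_stub;
        return 1;       /* regs->ip changed: skip single-stepping */
}
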
>
>> Could you add a patch with a test the way Josef did,
>> or describe the steps to test this new mode?
>
> Do you mean the patch below? If so, it should work without any change.
>
>  [PATCH v10 4/5] samples/bpf: add a test for bpf_override_return
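
For readers without that series at hand, such a sample boils down to roughly
the BPF program below, written in the samples/bpf style of the time; the
probed symbol (open_ctree from btrfs) and the forced error code are
illustrative, and the target must be on the error-injection opt-in list with
CONFIG_BPF_KPROBE_OVERRIDE=y:

#include <uapi/linux/bpf.h>
#include <uapi/linux/ptrace.h>
#include "bpf_helpers.h"

SEC("kprobe/open_ctree")
int override_open_ctree(struct pt_regs *ctx)
{
        unsigned long rc = -12;         /* -ENOMEM */

        /* Make open_ctree() return -ENOMEM to its caller without running it. */
        bpf_override_return(ctx, rc);
        return 0;
}

char _license[] SEC("license") = "GPL";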

Yeah, I expect the bpf_override_return test to work as-is.
I'm asking for a test of the new functionality added by this patch,
in particular a kprobe on a function entry without ftrace.
How did you test it, and how can I repeat the test?
I'm still not sure that it works correctly.
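
As for reproducing the non-ftrace case: a minimal sketch of one way to check
which mechanism a probe actually ended up using is a throwaway module that
registers a kprobe on the same entry and reports KPROBE_FLAG_FTRACE, the flag
the old trace_kprobe_ftrace() check reads. The target symbol and module
boilerplate are illustrative; building the kernel without ftrace support
(e.g. CONFIG_FUNCTION_TRACER=n) is one way to force the software-breakpoint
path.

#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
        .symbol_name    = "open_ctree",         /* illustrative target */
};

static int __init probe_mode_init(void)
{
        int ret = register_kprobe(&kp);

        if (ret)
                return ret;
        pr_info("kprobe at %pS uses %s\n", kp.addr,
                (kp.flags & KPROBE_FLAG_FTRACE) ? "ftrace" : "a software breakpoint");
        return 0;
}

static void __exit probe_mode_exit(void)
{
        unregister_kprobe(&kp);
}

module_init(probe_mode_init);
module_exit(probe_mode_exit);
MODULE_LICENSE("GPL");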
