Message-ID: <e8bae974-190b-f247-0d89-6cea4fd4cc39@suse.com>
Date: Thu, 28 Jan 2021 18:12:32 +0200
From: Nikolay Borisov <nborisov@...e.com>
To: Masami Hiramatsu <masami.hiramatsu@...il.com>,
Masami Hiramatsu <mhiramat@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>, bpf@...r.kernel.org
Subject: Re: kprobes broken since 0d00449c7a28 ("x86: Replace ist_enter() with
nmi_enter()")
On 28.01.21 at 5:38, Masami Hiramatsu wrote:
> Hi,
<snip>
>
> Alexei, could you tell me what is the concerning situation for bpf?
Another data point, Masami: this affects bpf kprobes which are
entered via int3; if the kprobe is instead entered via
kprobe_ftrace_handler, it works as expected. I haven't been able to
determine why a particular bpf probe won't use ftrace's infrastructure
when it's put at the beginning of the function. An alternative call
chain, going through kprobe_ftrace_handler, is (see the note after the
trace):
=> __ftrace_trace_stack
=> trace_call_bpf
=> kprobe_perf_func
=> kprobe_ftrace_handler
=> 0xffffffffc095d0c8
=> btrfs_validate_metadata_buffer
=> end_bio_extent_readpage
=> end_workqueue_fn
=> btrfs_work_helper
=> process_one_work
=> worker_thread
=> kthread
=> ret_from_fork
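
A note on the two entry paths, as I understand them (worth
double-checking): with CONFIG_KPROBES_ON_FTRACE=y a kprobe planted
exactly on the ftrace location at function entry is dispatched from
the ftrace trampoline via kprobe_ftrace_handler(), while a probe at
any other address gets an int3 breakpoint and is handled from the trap
path, which since 0d00449c7a28 runs under nmi_enter(). Whether a given
probe ended up ftrace-based shows up in the kprobes debugfs list,
where such probes are tagged [FTRACE]; hypothetical example output
(address and symbol made up for illustration):

# cat /sys/kernel/debug/kprobes/list
ffffffffa0123456  k  btrfs_validate_metadata_buffer+0x0  [FTRACE]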
>
> Thank you,
>
> From c5cd0e5f60ef6494c9e1579ec1b82b7344c41f9a Mon Sep 17 00:00:00 2001
> From: Masami Hiramatsu <mhiramat@...nel.org>
> Date: Thu, 28 Jan 2021 12:31:02 +0900
> Subject: [PATCH] tracing: bpf: Remove in_nmi() check from kprobe handler
>
> Since commit 0d00449c7a28 ("x86: Replace ist_enter() with nmi_enter()")
> changed the kprobe handler to run in NMI context, in_nmi() always returns
> true there. This means bpf events on kprobes are always skipped.
>
> Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
> ---
> kernel/trace/bpf_trace.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 6c0018abe68a..764400260eb6 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -96,9 +96,6 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
> {
> unsigned int ret;
>
> - if (in_nmi()) /* not supported yet */
> - return 1;
> -
> cant_sleep();
>
> if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
>
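
For anyone following along, here is a minimal userspace model of why
in_nmi() is now always true in the int3 kprobe path (bit layout
approximated from include/linux/preempt.h circa v5.8; the real
__nmi_enter() also adds HARDIRQ_OFFSET, omitted here for brevity):

#include <stdio.h>

/* Simplified model of the kernel's preempt_count bit layout: the NMI
 * nesting count occupies its own bit field, so in_nmi() is a plain
 * mask test on preempt_count. */
#define NMI_SHIFT	20
#define NMI_BITS	4
#define NMI_MASK	(((1UL << NMI_BITS) - 1) << NMI_SHIFT)
#define NMI_OFFSET	(1UL << NMI_SHIFT)

static unsigned long preempt_count;	/* per-CPU in the real kernel */

#define in_nmi()	(preempt_count & NMI_MASK)

static void nmi_enter(void) { preempt_count += NMI_OFFSET; }
static void nmi_exit(void)  { preempt_count -= NMI_OFFSET; }

int main(void)
{
	printf("before: in_nmi() = %lu\n", in_nmi());
	nmi_enter();	/* what the int3 kprobe path does since 0d00449c7a28 */
	/* non-zero here, so the just-removed check made trace_call_bpf() bail */
	printf("inside: in_nmi() = %lu\n", in_nmi());
	nmi_exit();
	return 0;
}

With the in_nmi() check removed, trace_call_bpf() still guards against
recursion via the bpf_prog_active counter visible at the end of the
hunk above.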