Message-Id: <20210130110249.61fdad8f0cfe51a121c72302@kernel.org>
Date: Sat, 30 Jan 2021 11:02:49 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Alexei Starovoitov <alexei.starovoitov@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Nikolay Borisov <nborisov@...e.com>,
LKML <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>, bpf <bpf@...r.kernel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: kprobes broken since 0d00449c7a28
("x86: Replace ist_enter() with nmi_enter()")
On Fri, 29 Jan 2021 18:59:43 +0100
Peter Zijlstra <peterz@...radead.org> wrote:
> On Fri, Jan 29, 2021 at 09:45:48AM -0800, Alexei Starovoitov wrote:
> > Same things apply to bpf side. We can statically prove safety for
> > ftrace and kprobe attaching whereas to deal with NMI situation we
> > have to use run-time checks for recursion prevention, etc.
>
> I have no idea what you're saying. You can attach to functions that are
> called with random locks held, you can create kprobes in some very
> sensitive places.
>
> What can you staticlly prove about that?
For bpf and the kprobe tracer, if a probe hits in NMI context, the
handler can be invoked while another instance of the same handler is
already processing an event. kprobes itself carefully avoids that
deadlock by checking for recursion with a per-cpu variable. But if the
handler is shared with other events, like tracepoints, it needs its
own recursion checker too.
So, Alexei, maybe you need something like this instead of the in_nmi() check.

DEFINE_PER_CPU(bool, under_running_bpf);

void common_handler(void)
{
	/* Bail out if we are already running on this CPU. */
	if (__this_cpu_read(under_running_bpf))
		return;

	__this_cpu_write(under_running_bpf, true);
	/* execute bpf prog */
	__this_cpu_write(under_running_bpf, false);
}
Does this work for you?
Thank you,
--
Masami Hiramatsu <mhiramat@...nel.org>