Message-ID: <20190107213439.GD5966@xps-13>
Date: Mon, 7 Jan 2019 22:34:39 +0100
From: Andrea Righi <righi.andrea@...il.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
Ingo Molnar <mingo@...hat.com>, peterz@...radead.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] kprobes: Fix kretprobe incorrect stacking order problem

On Mon, Jan 07, 2019 at 04:28:33PM -0500, Steven Rostedt wrote:
> On Mon, 7 Jan 2019 22:19:04 +0100
> Andrea Righi <righi.andrea@...il.com> wrote:
>
> > > > If we put a kretprobe to raw_spin_lock_irqsave() it looks like
> > > > kretprobe is going to call kretprobe...
> > >
> > > Right, but we should be able to add some recursion protection to stop
> > > that. I have similar protection in the ftrace code.
> >
> > If we assume that __raw_spin_lock/unlock*() are always inlined a
>
> I wouldn't assume that.
>
> > possible way to prevent this recursion could be to use directly those
> > functions to do locking from the kretprobe trampoline.
> >
> > But I'm not sure if that's a safe assumption... if not I'll see if I can
> > find a better solution.
>
> All you need to do is have a per_cpu variable, where you just do:
>
> preempt_disable_notrace();
> if (this_cpu_read(kprobe_recursion))
> goto out;
> this_cpu_inc(kprobe_recursion);
> [...]
> this_cpu_dec(kprobe_recursion);
> out:
> preempt_enable_notrace();
>
> And then just ignore any kprobes that trigger while you are processing
> the current kprobe.
>
> Something like that. If you want (or if it already happens) replace
> preempt_disable() with local_irq_save().
Oh... definitely much better. I'll work on that and send a new patch.
Thanks for the suggestion!
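
Something along these lines, I guess (just a rough sketch for now;
kretprobe_recursion and trampoline_handler_guarded are placeholder
names I made up, not existing kernel symbols):

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Per-CPU flag: set while the kretprobe trampoline is running on
 * this CPU, so any kretprobe hit from inside the trampoline itself
 * (e.g. on raw_spin_lock_irqsave()) is simply ignored.
 */
static DEFINE_PER_CPU(int, kretprobe_recursion);

static void trampoline_handler_guarded(void)
{
	preempt_disable_notrace();

	/* Already inside the trampoline on this CPU: bail out to
	 * break the recursion instead of deadlocking.
	 */
	if (this_cpu_read(kretprobe_recursion))
		goto out;

	this_cpu_inc(kretprobe_recursion);

	/* ... do the real kretprobe trampoline work here ... */

	this_cpu_dec(kretprobe_recursion);
out:
	preempt_enable_notrace();
}

And, as you said, switch preempt_disable_notrace()/enable to
local_irq_save()/restore if that turns out to be needed.
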
-Andrea