Message-Id: <20201106192513.80b330351c0cafd03134b0d1@kernel.org>
Date: Fri, 6 Nov 2020 19:25:13 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Steven Rostedt (VMware) <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org,
Masami Hiramatsu <mhiramat@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jiri Kosina <jikos@...nel.org>,
Miroslav Benes <mbenes@...e.cz>,
Petr Mladek <pmladek@...e.com>, Guo Ren <guoren@...nel.org>,
"James E.J. Bottomley" <James.Bottomley@...senPartnership.com>,
Helge Deller <deller@....de>,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
"David S. Miller" <davem@...emloft.net>,
linux-csky@...r.kernel.org, linux-parisc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-s390@...r.kernel.org
Subject: Re: [PATCH 05/11 v3] kprobes/ftrace: Add recursion protection to
the ftrace callback
On Thu, 05 Nov 2020 21:32:40 -0500
Steven Rostedt (VMware) <rostedt@...dmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@...dmis.org>
>
> If a ftrace callback does not supply its own recursion protection and
> does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
> make a helper trampoline to do so before calling the callback instead of
> just calling the callback directly.
>
> The default for ftrace_ops is going to change: ftrace will expect handlers
> to provide their own recursion protection, unless their ftrace_ops states
> otherwise.
>
> Link: https://lkml.kernel.org/r/20201028115613.140212174@goodmis.org
>
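For other callback owners doing the same conversion, the pattern each
handler ends up with after this series is roughly the following (an
illustrative sketch only; the callback name is made up, and the real
per-arch changes are in the hunks below):

	void my_ftrace_callback(unsigned long ip, unsigned long parent_ip,
				struct ftrace_ops *ops, struct pt_regs *regs)
	{
		int bit;

		/* Guard against the callback recursing into itself. */
		bit = ftrace_test_recursion_trylock();
		if (bit < 0)
			return;

		/* ftrace no longer disables preemption for the callback. */
		preempt_disable_notrace();

		/* ... handler body, e.g. get_kprobe() and dispatch ... */

		preempt_enable_notrace();
		ftrace_test_recursion_unlock(bit);
	}
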
Looks good to me.
Acked-by: Masami Hiramatsu <mhiramat@...nel.org>
Thank you!
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Masami Hiramatsu <mhiramat@...nel.org>
> Cc: Guo Ren <guoren@...nel.org>
> Cc: "James E.J. Bottomley" <James.Bottomley@...senPartnership.com>
> Cc: Helge Deller <deller@....de>
> Cc: Michael Ellerman <mpe@...erman.id.au>
> Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> Cc: Paul Mackerras <paulus@...ba.org>
> Cc: Heiko Carstens <hca@...ux.ibm.com>
> Cc: Vasily Gorbik <gor@...ux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@...ibm.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Borislav Petkov <bp@...en8.de>
> Cc: x86@...nel.org
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: "Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>
> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>
> Cc: "David S. Miller" <davem@...emloft.net>
> Cc: linux-csky@...r.kernel.org
> Cc: linux-parisc@...r.kernel.org
> Cc: linuxppc-dev@...ts.ozlabs.org
> Cc: linux-s390@...r.kernel.org
> Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
> ---
>
> Changes since v2:
>
> - Move get_kprobe() into preempt disabled sections for various archs
>
>
> arch/csky/kernel/probes/ftrace.c | 12 ++++++++++--
> arch/parisc/kernel/ftrace.c | 16 +++++++++++++---
> arch/powerpc/kernel/kprobes-ftrace.c | 11 ++++++++++-
> arch/s390/kernel/ftrace.c | 16 +++++++++++++---
> arch/x86/kernel/kprobes/ftrace.c | 12 ++++++++++--
> 5 files changed, 56 insertions(+), 11 deletions(-)
>
> diff --git a/arch/csky/kernel/probes/ftrace.c b/arch/csky/kernel/probes/ftrace.c
> index 5264763d05be..5eb2604fdf71 100644
> --- a/arch/csky/kernel/probes/ftrace.c
> +++ b/arch/csky/kernel/probes/ftrace.c
> @@ -13,16 +13,21 @@ int arch_check_ftrace_location(struct kprobe *p)
> void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *ops, struct pt_regs *regs)
> {
> + int bit;
> bool lr_saver = false;
> struct kprobe *p;
> struct kprobe_ctlblk *kcb;
>
> - /* Preempt is disabled by ftrace */
> + bit = ftrace_test_recursion_trylock();
> + if (bit < 0)
> + return;
> +
> + preempt_disable_notrace();
> p = get_kprobe((kprobe_opcode_t *)ip);
> if (!p) {
> p = get_kprobe((kprobe_opcode_t *)(ip - MCOUNT_INSN_SIZE));
> if (unlikely(!p) || kprobe_disabled(p))
> - return;
> + goto out;
> lr_saver = true;
> }
>
> @@ -56,6 +61,9 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> */
> __this_cpu_write(current_kprobe, NULL);
> }
> +out:
> + preempt_enable_notrace();
> + ftrace_test_recursion_unlock(bit);
> }
> NOKPROBE_SYMBOL(kprobe_ftrace_handler);
>
> diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
> index 63e3ecb9da81..13d85042810a 100644
> --- a/arch/parisc/kernel/ftrace.c
> +++ b/arch/parisc/kernel/ftrace.c
> @@ -207,14 +207,21 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *ops, struct pt_regs *regs)
> {
> struct kprobe_ctlblk *kcb;
> - struct kprobe *p = get_kprobe((kprobe_opcode_t *)ip);
> + struct kprobe *p;
> + int bit;
>
> - if (unlikely(!p) || kprobe_disabled(p))
> + bit = ftrace_test_recursion_trylock();
> + if (bit < 0)
> return;
>
> + preempt_disable_notrace();
> + p = get_kprobe((kprobe_opcode_t *)ip);
> + if (unlikely(!p) || kprobe_disabled(p))
> + goto out;
> +
> if (kprobe_running()) {
> kprobes_inc_nmissed_count(p);
> - return;
> + goto out;
> }
>
> __this_cpu_write(current_kprobe, p);
> @@ -235,6 +242,9 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> }
> }
> __this_cpu_write(current_kprobe, NULL);
> +out:
> + preempt_enable_notrace();
> + ftrace_test_recursion_unlock(bit);
> }
> NOKPROBE_SYMBOL(kprobe_ftrace_handler);
>
> diff --git a/arch/powerpc/kernel/kprobes-ftrace.c b/arch/powerpc/kernel/kprobes-ftrace.c
> index 972cb28174b2..5df8d50c65ae 100644
> --- a/arch/powerpc/kernel/kprobes-ftrace.c
> +++ b/arch/powerpc/kernel/kprobes-ftrace.c
> @@ -18,10 +18,16 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
> {
> struct kprobe *p;
> struct kprobe_ctlblk *kcb;
> + int bit;
>
> + bit = ftrace_test_recursion_trylock();
> + if (bit < 0)
> + return;
> +
> + preempt_disable_notrace();
> p = get_kprobe((kprobe_opcode_t *)nip);
> if (unlikely(!p) || kprobe_disabled(p))
> - return;
> + goto out;
>
> kcb = get_kprobe_ctlblk();
> if (kprobe_running()) {
> @@ -52,6 +58,9 @@ void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
> */
> __this_cpu_write(current_kprobe, NULL);
> }
> +out:
> + preempt_enable_notrace();
> + ftrace_test_recursion_unlock(bit);
> }
> NOKPROBE_SYMBOL(kprobe_ftrace_handler);
>
> diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
> index b388e87a08bf..8f31c726537a 100644
> --- a/arch/s390/kernel/ftrace.c
> +++ b/arch/s390/kernel/ftrace.c
> @@ -201,14 +201,21 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> struct ftrace_ops *ops, struct pt_regs *regs)
> {
> struct kprobe_ctlblk *kcb;
> - struct kprobe *p = get_kprobe((kprobe_opcode_t *)ip);
> + struct kprobe *p;
> + int bit;
>
> - if (unlikely(!p) || kprobe_disabled(p))
> + bit = ftrace_test_recursion_trylock();
> + if (bit < 0)
> return;
>
> + preempt_disable_notrace();
> + p = get_kprobe((kprobe_opcode_t *)ip);
> + if (unlikely(!p) || kprobe_disabled(p))
> + goto out;
> +
> if (kprobe_running()) {
> kprobes_inc_nmissed_count(p);
> - return;
> + goto out;
> }
>
> __this_cpu_write(current_kprobe, p);
> @@ -228,6 +235,9 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> }
> }
> __this_cpu_write(current_kprobe, NULL);
> +out:
> + preempt_enable_notrace();
> + ftrace_test_recursion_unlock(bit);
> }
> NOKPROBE_SYMBOL(kprobe_ftrace_handler);
>
> diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
> index 681a4b36e9bb..a40a6cdfcca3 100644
> --- a/arch/x86/kernel/kprobes/ftrace.c
> +++ b/arch/x86/kernel/kprobes/ftrace.c
> @@ -18,11 +18,16 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> {
> struct kprobe *p;
> struct kprobe_ctlblk *kcb;
> + int bit;
>
> - /* Preempt is disabled by ftrace */
> + bit = ftrace_test_recursion_trylock();
> + if (bit < 0)
> + return;
> +
> + preempt_disable_notrace();
> p = get_kprobe((kprobe_opcode_t *)ip);
> if (unlikely(!p) || kprobe_disabled(p))
> - return;
> + goto out;
>
> kcb = get_kprobe_ctlblk();
> if (kprobe_running()) {
> @@ -52,6 +57,9 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> */
> __this_cpu_write(current_kprobe, NULL);
> }
> +out:
> + preempt_enable_notrace();
> + ftrace_test_recursion_unlock(bit);
> }
> NOKPROBE_SYMBOL(kprobe_ftrace_handler);
>
> --
> 2.28.0
>
>
--
Masami Hiramatsu <mhiramat@...nel.org>