Message-Id: <20170216194935.GF4515@naverao1-tp.localdomain>
Date: Fri, 17 Feb 2017 01:19:35 +0530
From: "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH 1/3] powerpc: kprobes: add support for KPROBES_ON_FTRACE

On 2017/02/15 04:11PM, Masami Hiramatsu wrote:
> Hi Naveen,
>
> On Wed, 15 Feb 2017 00:28:34 +0530
> "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com> wrote:
>
> > diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
> > index e51a045f3d3b..a8f414a0b141 100644
> > --- a/arch/powerpc/kernel/optprobes.c
> > +++ b/arch/powerpc/kernel/optprobes.c
> > @@ -70,6 +70,9 @@ static unsigned long can_optimize(struct kprobe *p)
> >  	struct instruction_op op;
> >  	unsigned long nip = 0;
> >  
> > +	if (unlikely(kprobe_ftrace(p)))
> > +		return 0;
> > +
>
> Hmm, this should not be checked in arch-dependent code, since that would
> duplicate the same check in each arch. Please try below.
Thanks, Masami!
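
For reference, and assuming I'm reading kernel/kprobes.c correctly, the
generic code already records this at registration time:
arch_check_ftrace_location() sets KPROBE_FLAG_FTRACE when a probe lands on
an ftrace location, so a common helper only needs the flag test and no arch
has to repeat it:

	/* kernel/kprobes.c: true (!0) if the kprobe is ftrace-based */
	static inline int kprobe_ftrace(struct kprobe *p)
	{
		return p->flags & KPROBE_FLAG_FTRACE;
	}
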
>
> Thanks,
>
> commit 897bb304974c1b0c19a4274fea220b175b07f353
> Author: Masami Hiramatsu <mhiramat@...nel.org>
> Date: Wed Feb 15 15:48:14 2017 +0900
>
>     kprobes: Skip preparing optprobe if the probe is ftrace-based
> 
>     Skip preparing an optprobe if the probe is ftrace-based, since it must
>     not be optimized anyway (or is already optimized by ftrace).
> 
>     Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
This works for me.
Tested-by: Naveen N. Rao <naveen.n.rao@...ux.vnet.ibm.com>
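
In case it helps anyone reproduce: a minimal way to exercise this path (only
a sketch, not the exact test used here) is a tiny module that registers a
kprobe at a function's ftrace call site on a CONFIG_KPROBES_ON_FTRACE=y
kernel and then checks that the probe carries KPROBE_FLAG_FTRACE. The symbol
name below is purely illustrative, and on powerpc the ftrace site may not be
at offset 0:

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/kprobes.h>

	static struct kprobe kp = {
		/* illustrative probe point only */
		.symbol_name = "do_sys_open",
	};

	static int __init kpft_init(void)
	{
		int ret = register_kprobe(&kp);

		if (ret < 0)
			return ret;
		pr_info("kprobe at %s, ftrace-based: %d\n", kp.symbol_name,
			!!(kp.flags & KPROBE_FLAG_FTRACE));
		return 0;
	}

	static void __exit kpft_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(kpft_init);
	module_exit(kpft_exit);
	MODULE_LICENSE("GPL");
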
Regards,
Naveen
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index ebb4dad..546a8b1 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -746,13 +746,20 @@ static void kill_optimized_kprobe(struct kprobe *p)
>  	arch_remove_optimized_kprobe(op);
>  }
>  
> +static inline
> +void __prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
> +{
> +	if (!kprobe_ftrace(p))
> +		arch_prepare_optimized_kprobe(op, p);
> +}
> +
>  /* Try to prepare optimized instructions */
>  static void prepare_optimized_kprobe(struct kprobe *p)
>  {
>  	struct optimized_kprobe *op;
>  
>  	op = container_of(p, struct optimized_kprobe, kp);
> -	arch_prepare_optimized_kprobe(op, p);
> +	__prepare_optimized_kprobe(op, p);
>  }
>  
>  /* Allocate new optimized_kprobe and try to prepare optimized instructions */
> @@ -766,7 +773,7 @@ static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
>  
>  	INIT_LIST_HEAD(&op->list);
>  	op->kp.addr = p->addr;
> -	arch_prepare_optimized_kprobe(op, p);
> +	__prepare_optimized_kprobe(op, p);
>  
>  	return &op->kp;
>  }
>
>
>
> --
> Masami Hiramatsu <mhiramat@...nel.org>
>