Message-ID: <1394735618.26600.19.camel@pippen.local.home>
Date: Thu, 13 Mar 2014 14:33:38 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Will Deacon <will.deacon@....com>
Cc: AKASHI Takahiro <takahiro.akashi@...aro.org>,
"fweisbec@...il.com" <fweisbec@...il.com>,
"mingo@...hat.com" <mingo@...hat.com>,
Catalin Marinas <Catalin.Marinas@....com>,
"tim.bird@...ymobile.com" <tim.bird@...ymobile.com>,
"gkulkarni@...iumnetworks.com" <gkulkarni@...iumnetworks.com>,
"dsaxena@...aro.org" <dsaxena@...aro.org>,
"arndb@...db.de" <arndb@...db.de>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 5/7] arm64: ftrace: Add dynamic ftrace support
On Thu, 2014-03-13 at 18:10 +0000, Will Deacon wrote:
>
> > +#else /* CONFIG_DYNAMIC_FTRACE */
> > +/*
> > + * _mcount() is used to build the kernel with the -pg option, but all the
> > + * branch instructions to _mcount() are initially replaced with NOPs at
> > + * kernel start up. Later, each NOP is turned into a branch to
> > + * ftrace_caller() when tracing is enabled, or back into a NOP when it is
> > + * disabled, on a per-function basis.
> > + */
> > +ENTRY(_mcount)
> > + ret
> > +ENDPROC(_mcount)
>
> Judging by your comment then, this should never be called. Is that right? If
> so, we could add a BUG-equivalent so we know if we missed an mcount during
> patching.
Actually, it can be called before the change to NOPs is done in early
boot. That conversion happens very early, but everything that runs
before ftrace_init() in init/main.c can still call _mcount.
> > + /*
> > + * Note:
> > + * Due to modules and __init, code can disappear and change,
> > + * we need to protect against faulting as well as code changing.
> > + * We do this by using the aarch64_insn_*() helpers, which use
> > + * probe_kernel_*().
> > + *
> > + * No lock is held here because all the modifications are run
> > + * through stop_machine().
> > + */
> > + if (validate) {
> > + if (aarch64_insn_read((void *)pc, &replaced))
> > + return -EFAULT;
> > +
> > + if (replaced != old)
> > + return -EINVAL;
> > + }
> > + if (aarch64_insn_patch_text_nosync((void *)pc, new))
> > + return -EPERM;
>
> I think you're better off propagating the errors here, rather than
> overriding them with EFAULT/EINVAL/EPERM.
The ftrace generic code expects to see these specific errors. Look at
ftrace_bug() in kernel/trace/ftrace.c.
>
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * Replace tracer function in ftrace_caller()
> > + */
> > +int ftrace_update_ftrace_func(ftrace_func_t func)
> > +{
> > + unsigned long pc;
> > + unsigned int new;
> > +
> > + pc = (unsigned long)&ftrace_call;
> > + new = aarch64_insn_gen_branch_imm(pc, (unsigned long)func, true);
> > +
> > + return ftrace_modify_code(pc, 0, new, false);
> > +}
> > +
> > +/*
> > + * Turn on the call to ftrace_caller() in instrumented function
> > + */
> > +int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
> > +{
> > + unsigned long pc = rec->ip;
> > + unsigned int old, new;
> > +
> > + old = aarch64_insn_gen_nop();
> > + new = aarch64_insn_gen_branch_imm(pc, addr, true);
> > +
> > + return ftrace_modify_code(pc, old, new, true);
> > +}
> > +
> > +/*
> > + * Turn off the call to ftrace_caller() in instrumented function
> > + */
> > +int ftrace_make_nop(struct module *mod,
> > + struct dyn_ftrace *rec, unsigned long addr)
> > +{
> > + unsigned long pc = rec->ip;
> > + unsigned int old, new;
> > +
> > + old = aarch64_insn_gen_branch_imm(pc, addr, true);
> > + new = aarch64_insn_gen_nop();
> > +
> > + return ftrace_modify_code(pc, old, new, true);
> > +}
> > +
> > +int __init ftrace_dyn_arch_init(void *data)
> > +{
> > + *(unsigned long *)data = 0;
> > +
> > + return 0;
> > +}
> > +#endif /* CONFIG_DYNAMIC_FTRACE */
> > +
> > #ifdef CONFIG_FUNCTION_GRAPH_TRACER
> > /*
> > * function_graph tracer expects ftrace_return_to_handler() to be called
> > @@ -61,4 +144,34 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
> > return;
> > }
> > }
> > +
> > +#ifdef CONFIG_DYNAMIC_FTRACE
> > +/*
> > + * Turn on/off the call to ftrace_graph_caller() in ftrace_caller()
> > + * depending on @enable.
> > + */
> > +static int ftrace_modify_graph_caller(bool enable)
> > +{
> > + unsigned long pc = (unsigned long)&ftrace_graph_call;
> > + unsigned int branch, nop, old, new;
> > +
> > + branch = aarch64_insn_gen_branch_imm(pc,
> > + (unsigned long)ftrace_graph_caller, false);
> > + nop = aarch64_insn_gen_nop();
> > + old = enable ? nop : branch;
> > + new = enable ? branch : nop;
> > +
> > + return ftrace_modify_code(pc, old, new, true);
>
> You could rewrite this as:
>
> if (enable)
> return ftrace_modify_code(pc, nop, branch, true);
> else
> return ftrace_modify_code(pc, branch, nop, true);
>
> which I find easier to read.
Heh, maybe that could be updated in other archs too. I'll have to think
about that one.
-- Steve
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/