Message-ID: <22289b8e-bfdf-37cb-0234-2233a528ef61@arm.com>
Date: Wed, 6 Feb 2019 09:30:58 +0000
From: Julien Thierry <julien.thierry@....com>
To: Torsten Duwe <duwe@....de>, Mark Rutland <mark.rutland@....com>,
Will Deacon <will.deacon@....com>,
Catalin Marinas <catalin.marinas@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Arnd Bergmann <arnd@...db.de>,
AKASHI Takahiro <takahiro.akashi@...aro.org>,
Amit Daniel Kachhap <amit.kachhap@....com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
live-patching@...r.kernel.org
Subject: Re: [PATCH v7 2/3] arm64: implement ftrace with regs
On 06/02/2019 08:59, Julien Thierry wrote:
> Hi Torsten,
>
> On 18/01/2019 16:39, Torsten Duwe wrote:
>> Once gcc8 adds 2 NOPs at the beginning of each function, replace the
>> first NOP thus generated with a quick LR saver (move LR to scratch reg
>> x9), so that the second replacement insn, the call to ftrace, does not
>> clobber the value. Ftrace will then generate the standard stack frames.
>>
>> Note that patchable-function-entry in GCC disables IPA-RA, which means
>> ABI register calling conventions are obeyed *and* scratch registers
>> such as x9 are available.
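
To make the two-NOP scheme above concrete: at each function entry the
compiler now emits

	foo:
		nop		// patched to: mov x9, x30 (save LR)
		nop		// patched to: bl ftrace_caller

and a minimal sketch of generating the two replacement instructions
(the hard-coded MOV encoding is my assumption; the branch helper from
<asm/insn.h> is the one the patch already uses):

	static u32 gen_lr_save(void)
	{
		/* mov x9, x30 is an alias of orr x9, xzr, x30 */
		return 0xaa1e03e9;
	}

	static u32 gen_ftrace_branch(unsigned long pc, unsigned long addr)
	{
		return aarch64_insn_gen_branch_imm(pc, addr,
						   AARCH64_INSN_BRANCH_LINK);
	}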
>>
>> Introduce and handle an ftrace_regs_trampoline for module PLTs, right
>> after ftrace_trampoline, and double the size of this special section.
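
For the module case, as I read it the special section simply grows a
second slot (a sketch of the intended layout; slot roles taken from the
paragraph above, field name assumed):

	struct plt_entry trampolines[2];
	/* trampolines[0]: ftrace_trampoline      -> ftrace_caller()      */
	/* trampolines[1]: ftrace_regs_trampoline -> ftrace_regs_caller() */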
>>
>> Signed-off-by: Torsten Duwe <duwe@...e.de>
>>
>> ---
>>
>> Mark, if you see your ftrace entry macro code represented correctly
>> here, please add your sign-off, as I initially copied it from your mail.
>>
>> ---
>> arch/arm64/include/asm/ftrace.h | 17 ++++-
>> arch/arm64/include/asm/module.h | 3
>> arch/arm64/kernel/entry-ftrace.S | 125 +++++++++++++++++++++++++++++++++++++--
>> arch/arm64/kernel/ftrace.c | 114 ++++++++++++++++++++++++++---------
>> arch/arm64/kernel/module-plts.c | 3
>> arch/arm64/kernel/module.c | 2
>> 6 files changed, 227 insertions(+), 37 deletions(-)
>
> [...]
>
>> --- a/arch/arm64/kernel/ftrace.c
>> +++ b/arch/arm64/kernel/ftrace.c
>> @@ -133,17 +163,45 @@ int ftrace_make_call(struct dyn_ftrace *
>> return ftrace_modify_code(pc, old, new, true);
>> }
>>
>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>> +int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
>> + unsigned long addr)
>> +{
>> + unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
>> + u32 old, new;
>> +
>> + old = aarch64_insn_gen_branch_imm(pc, old_addr, true);
>> + new = aarch64_insn_gen_branch_imm(pc, addr, true);
>> +
>> + return ftrace_modify_code(pc, old, new, true);
>> +}
>> +#endif
>> +
>> /*
>> * Turn off the call to ftrace_caller() in instrumented function
>> */
>> int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
>> unsigned long addr)
>> {
>> - unsigned long pc = rec->ip;
>> + unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
>
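To make the hunk above concrete: rec->ip records the address of the
first NOP, while the bl being rewritten sits one instruction later,
hence the offset. A sketch of the constant's definition (name from the
patch; I'm assuming it lives in asm/ftrace.h):

	/* the branch to ftrace_caller() is the second patched insn */
	#define REC_IP_BRANCH_OFFSET	AARCH64_INSN_SIZE
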
> Sorry to come back on this patch again, but I was looking at the ftrace
> code a bit, and I see that when processing the ftrace call locations,
> ftrace calls ftrace_call_adjust() on every ip registered as mcount
> caller (or in our case patchable entries). This ftrace_call_adjust() is
> arch specific, so I was thinking we could place the offset in here once
> and for all so we don't have to worry about it in the future.
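
A minimal sketch of what that could look like in asm/ftrace.h (assuming
the two-NOP entry layout from the patch):

	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		/* addr points at the first NOP; the bl lives in the second */
		return addr + AARCH64_INSN_SIZE;
	}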
>
> Also, I'm unsure whether it would be safe, but we could patch the "mov
> x9, lr" there as well. In theory, this would be called at init time
> (before secondary CPUs are brought up) and when loading a module (so I'd
> expect no-one is executing that code *yet*).
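
If that is indeed safe, the same hook could write the LR saver too,
something like this (the fixed MOV encoding and the choice of the
nosync patcher are assumptions on my part; error handling omitted):

	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		u32 mov_x9_lr = 0xaa1e03e9;	/* mov x9, x30 */

		/* nothing executes this code yet, so skip the full sync */
		aarch64_insn_patch_text_nosync((void *)addr, mov_x9_lr);
		return addr + AARCH64_INSN_SIZE;
	}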
>
And if the above works, we could also define CC_HAVE_NOP_MCOUNT (when we
have the patchable entry), and then we just wouldn't have to worry about
the initial ftrace_make_nop() with MCOUNT_ADDR.
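
IIRC the core already does something of this shape for s390's
-mnop-mcount support (the macro there is spelled CC_USING_NOP_MCOUNT);
roughly, the initial conversion in kernel/trace/ftrace.c is guarded
like:

	#ifndef CC_USING_NOP_MCOUNT
		ret = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
	#endif
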
--
Julien Thierry