Message-ID: <7807fc23-c6c9-b6a9-62ef-e34e8beefdea@bytedance.com>
Date: Tue, 22 Mar 2022 22:14:11 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: mark.rutland@....com, mingo@...hat.com, tglx@...utronix.de,
catalin.marinas@....com, will@...nel.org,
dave.hansen@...ux.intel.com, broonie@...nel.org, x86@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
songmuchun@...edance.com, qirui.001@...edance.com
Subject: Re: [External] Re: [PATCH v3 3/3] arm64/ftrace: Make function graph
use ftrace directly
On 2022/3/22 9:41 PM, Steven Rostedt wrote:
> On Tue, 22 Mar 2022 20:48:00 +0800
> Chengming Zhou <zhouchengming@...edance.com> wrote:
>
>> Hello,
>>
>> ping... any comments?
>
> Hi Chengming,
>
> BTW, if you don't hear back for a week, it's OK to send a ping. You don't
> need to wait a month. Usually, it's just that the maintainers have other
> priorities and will try to look at it when they get a chance, but then
> forget to do so :-/
Hi Steve, ok, I got it ;-)
>
>
>>
>> Thanks.
>>
>>> On 2022/2/24 5:32 PM, Chengming Zhou wrote:
>>> As we did in commit 0c0593b45c9b ("x86/ftrace: Make function graph
>>> use ftrace directly"), we don't need a special hook for the graph
>>> tracer; instead we use the graph_ops->func function to install the
>>> return_hooker.
>>>
>>> Since commit 3b23e4991fb6 ("arm64: implement ftrace with regs")
>>> added an implementation of FTRACE_WITH_REGS on arm64, we can easily
>>> adopt the same cleanup there. This cleanup only changes the
>>> FTRACE_WITH_REGS implementation, so the mcount-based implementation
>>> is unaffected.
>>>
>>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>>> ---
>>> Changes in v3:
>>> - Add comments in ftrace_graph_func() as suggested by Steve.
>>>
>>> Changes in v2:
>>> - Remove FTRACE_WITH_REGS ftrace_graph_caller asm as suggested by Mark.
>>> ---
>>> arch/arm64/include/asm/ftrace.h | 7 +++++++
>>> arch/arm64/kernel/entry-ftrace.S | 17 -----------------
>>> arch/arm64/kernel/ftrace.c | 17 +++++++++++++++++
>>> 3 files changed, 24 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
>>> index 1494cfa8639b..dbc45a4157fa 100644
>>> --- a/arch/arm64/include/asm/ftrace.h
>>> +++ b/arch/arm64/include/asm/ftrace.h
>>> @@ -80,8 +80,15 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
>>>
>>> #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>>> struct dyn_ftrace;
>>> +struct ftrace_ops;
>>> +struct ftrace_regs;
>>> +
>>> int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
>>> #define ftrace_init_nop ftrace_init_nop
>>> +
>>> +void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
>>> + struct ftrace_ops *op, struct ftrace_regs *fregs);
>>> +#define ftrace_graph_func ftrace_graph_func
>>> #endif
>>>
>>> #define ftrace_return_address(n) return_address(n)
>>> diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
>>> index e535480a4069..d42a205ef625 100644
>>> --- a/arch/arm64/kernel/entry-ftrace.S
>>> +++ b/arch/arm64/kernel/entry-ftrace.S
>>> @@ -97,12 +97,6 @@ SYM_CODE_START(ftrace_common)
>>> SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
>>> bl ftrace_stub
>>>
>>> -#ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>> -SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
>>> - nop // If enabled, this will be replaced
>>> - // "b ftrace_graph_caller"
>>> -#endif
>>> -
>>> /*
>>> * At the callsite x0-x8 and x19-x30 were live. Any C code will have preserved
>>> * x19-x29 per the AAPCS, and we created frame records upon entry, so we need
>>> @@ -127,17 +121,6 @@ ftrace_common_return:
>>> ret x9
>>> SYM_CODE_END(ftrace_common)
>>>
>>> -#ifdef CONFIG_FUNCTION_GRAPH_TRACER
>>> -SYM_CODE_START(ftrace_graph_caller)
>>> - ldr x0, [sp, #S_PC]
>>> - sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
>>> - add x1, sp, #S_LR // parent_ip (callsite's LR)
>>> - ldr x2, [sp, #PT_REGS_SIZE] // parent fp (callsite's FP)
>>> - bl prepare_ftrace_return
>>> - b ftrace_common_return
>>> -SYM_CODE_END(ftrace_graph_caller)
>>> -#endif
>>> -
>>> #else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
>>>
>>> /*
>>> diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
>>> index 4506c4a90ac1..35eb7c9b5e53 100644
>>> --- a/arch/arm64/kernel/ftrace.c
>>> +++ b/arch/arm64/kernel/ftrace.c
>>> @@ -268,6 +268,22 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
>>> }
>>>
>>> #ifdef CONFIG_DYNAMIC_FTRACE
>>> +
>>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>
> Is there a case were we have DYNAMIC_FTRACE but not
> DYNAMIC_FTRACE_WITH_REGS?
Yes, when HAVE_DYNAMIC_FTRACE_WITH_REGS is not selected because the GCC version is too old to support it.
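To be concrete, arch/arm64/Kconfig only selects it when the compiler
supports patchable function entries, roughly:

    select HAVE_DYNAMIC_FTRACE_WITH_REGS \
            if $(cc-option,-fpatchable-function-entry=2)

so with an old GCC we still get CONFIG_DYNAMIC_FTRACE=y but
CONFIG_DYNAMIC_FTRACE_WITH_REGS unset, and the mcount-based
implementation is used instead.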
>
>>> +void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
>>> + struct ftrace_ops *op, struct ftrace_regs *fregs)
>>> +{
>>> + /*
>>> + * Although graph_ops doesn't have FTRACE_OPS_FL_SAVE_REGS set in flags,
>>> + * regs can't be NULL in DYNAMIC_FTRACE_WITH_REGS by design. This
>>> + * should be revisited once DYNAMIC_FTRACE_WITH_ARGS is implemented.
>>> + */
>>> + struct pt_regs *regs = arch_ftrace_get_regs(fregs);
>>> + unsigned long *parent = (unsigned long *)&procedure_link_pointer(regs);
>>> +
>>> + prepare_ftrace_return(ip, parent, frame_pointer(regs));
>>> +}
>>> +#else
>
> You deleted ftrace_graph_caller above from entry-ftrace.S, if we can get
> here with some options, wouldn't that break the build?
The ftrace_graph_caller deleted above is only the CONFIG_DYNAMIC_FTRACE_WITH_REGS
version; the mcount-based ftrace_graph_caller in the !WITH_REGS section is kept.
I also tried building with an old GCC that doesn't select
HAVE_DYNAMIC_FTRACE_WITH_REGS, and the build succeeds.
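For clarity, here is a rough sketch (declarations only, not the literal
file contents) of how arch/arm64/kernel/ftrace.c ends up laid out after
this patch; the !WITH_REGS side keeps the old branch-patching helpers,
which pair with the ftrace_graph_caller stub remaining in the mcount
half of entry-ftrace.S:

    #ifdef CONFIG_FUNCTION_GRAPH_TRACER
    #ifdef CONFIG_DYNAMIC_FTRACE

    #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
    /* New path: graph_ops->func installs the return hooker directly,
     * so no patched branch to an asm ftrace_graph_caller is needed. */
    void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
                           struct ftrace_ops *op, struct ftrace_regs *fregs);
    #else
    /* mcount-based path: still toggles the branch to the
     * ftrace_graph_caller asm stub kept in the !WITH_REGS half of
     * entry-ftrace.S, so nothing references a deleted symbol. */
    int ftrace_enable_ftrace_graph_caller(void);
    int ftrace_disable_ftrace_graph_caller(void);
    #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */

    #endif /* CONFIG_DYNAMIC_FTRACE */
    #endif /* CONFIG_FUNCTION_GRAPH_TRACER */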
Thanks.
>
> -- Steve
>
>
>>> /*
>>> * Turn on/off the call to ftrace_graph_caller() in ftrace_caller()
>>> * depending on @enable.
>>> @@ -297,5 +313,6 @@ int ftrace_disable_ftrace_graph_caller(void)
>>> {
>>> return ftrace_modify_graph_caller(false);
>>> }
>>> +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
>>> #endif /* CONFIG_DYNAMIC_FTRACE */
>>> #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
>