Message-Id: <1651129169.fpixr00hgx.naveen@linux.ibm.com>
Date: Thu, 28 Apr 2022 13:15:22 +0530
From: "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
llvm@...ts.linux.dev, Michael Ellerman <mpe@...erman.id.au>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>
Subject: Re: [PATCH 2/2] recordmcount: Handle sections with no non-weak
symbols
Steven Rostedt wrote:
> On Wed, 27 Apr 2022 15:01:22 +0530
> "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com> wrote:
>
>> If one or both of these weak functions are overridden in future, in the
>> final vmlinux mcount table, references to these will change over to the
>> non-weak variant which has its own mcount location entry. As such, there
>> will now be two entries for these functions, both pointing to the same
>> non-weak location.
>
> But is that really true in all cases? x86 uses fentry these days, and other
> archs do things differently too. But the original mcount (-pg) call
> happened *after* the frame setup. That means the offset of the mcount call
> would be at different offsets wrt the start of the function. If you have
> one of these architectures that still use mcount, and the weak function
> doesn't have the same size frame setup as the overriding function, then the
> addresses will not be the same.
Indeed, plain old -pg will be a problem. I'm not sure there is a generic
way to address this. I suppose architectures will have to validate the
mcount locations, something like this?
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index d83758acd1c7c3..d8b104ed2fdf38 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -12,13 +12,7 @@
#ifndef __ASSEMBLY__
extern void _mcount(void);
-
-static inline unsigned long ftrace_call_adjust(unsigned long addr)
-{
-	/* relocation of mcount call site is the same as the address */
-	return addr;
-}
-
+unsigned long ftrace_call_adjust(unsigned long addr);
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
unsigned long sp);
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index 4ee04aacf9f13c..976c08cd0573f7 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -858,6 +858,17 @@ void arch_ftrace_update_code(int command)
ftrace_modify_all_code(command);
}
+unsigned long ftrace_call_adjust(unsigned long addr)
+{
+	ppc_inst_t op = ppc_inst_read((u32 *)addr);
+
+	if (!is_bl_op(op))
+		return 0;
+
+	/* relocation of mcount call site is the same as the address */
+	return addr;
+}
+
#ifdef CONFIG_PPC64
#define PACATOC offsetof(struct paca_struct, kernel_toc)
We can tighten those checks as necessary, but it will be up to the
architectures to validate the mcount locations. All of this will have to be
opt-in, so that only architectures doing the necessary validation will allow
mcount relocations against weak symbols.
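For context on why the sketch above returns 0 for a rejected location: if
I'm reading the generic code right, ftrace_process_locs() in
kernel/trace/ftrace.c already skips NULL addresses coming out of
ftrace_call_adjust() (originally to cope with linker padding between
mcount_loc sections), so returning 0 should make the bogus entry disappear
without further generic changes. Roughly (a paraphrase, not the exact
upstream loop):

	unsigned long *p = start;
	unsigned long addr;

	while (p < end) {
		addr = ftrace_call_adjust(*p++);
		/*
		 * NULL addresses are skipped (linker padding between
		 * mcount_loc sections today; with the change above, also
		 * call sites the architecture rejected as invalid).
		 */
		if (!addr)
			continue;
		/* ... record a dyn_ftrace entry for addr ... */
	}

The opt-in would then effectively be whether an architecture's
ftrace_call_adjust() can ever return 0.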
- Naveen