Message-ID: <YXbC3NRWDDfsW6DG@hirez.programming.kicks-ass.net>
Date: Mon, 25 Oct 2021 16:44:44 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: Frederic Weisbecker <frederic@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
James Morse <james.morse@....com>,
David Laight <David.Laight@...lab.com>,
Quentin Perret <qperret@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH 2/4] arm64: implement support for static call trampolines
On Mon, Oct 25, 2021 at 04:19:16PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 25, 2021 at 04:08:37PM +0200, Ard Biesheuvel wrote:
> > > Ooohh, but what if you go from !func to NOP.
> > >
> > > assuming:
> > >
> > > .literal = 0
> > > BTI C
> > > RET
> > >
> > > Then
> > >
> > > CPU0 CPU1
> > >
> > > [S] literal = func [I] NOP
> > > [S] insn[1] = NOP [L] x16 = literal (NULL)
> > > b x16
> > > *BANG*
> > >
> > > Is that possible? (total lack of memory ordering etc..)
> > >
> >
> > The CBZ will branch to the RET instruction if x16 == 0x0, so this
> > should not happen.
>
> Oooh, I missed that :/ I was about to suggest writing the address of a
> bare 'ret' trampoline instead of NULL into the literal.
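For reference, the trampoline as it stands (a layout sketch, not the
exact macro text from patch 2/4) is what makes the NULL literal safe:

  0:    .quad   0x0             // literal, patched to the call target
  tramp:
        bti     c
        <branch or nop>         // patched direct branch to the target
        ldr     x16, 0b         // load the literal
        cbz     x16, 1f         // NULL target: fall through to the ret
        br      x16
  1:    ret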
Perhaps a little something like so.. Shaves 2 instructions off each
trampoline.
--- a/arch/arm64/include/asm/static_call.h
+++ b/arch/arm64/include/asm/static_call.h
@@ -11,9 +11,7 @@
" hint 34 /* BTI C */ \n" \
insn " \n" \
" ldr x16, 0b \n" \
- " cbz x16, 1f \n" \
" br x16 \n" \
- "1: ret \n" \
" .popsection \n")
#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -90,6 +90,11 @@ int __kprobes aarch64_insn_write(void *a
return __aarch64_insn_write(addr, &i, AARCH64_INSN_SIZE);
}
+asm("__static_call_ret: \n"
+ " ret \n")
+
+extern void __static_call_ret(void);
+
void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
{
/*
@@ -97,9 +102,7 @@ void arch_static_call_transform(void *si
* 0x0 bti c <--- trampoline entry point
* 0x4 <branch or nop>
* 0x8 ldr x16, <literal>
- * 0xc cbz x16, 20
- * 0x10 br x16
- * 0x14 ret
+ * 0xc br x16
*/
struct {
u64 literal;
@@ -113,6 +116,7 @@ void arch_static_call_transform(void *si
insns.insn[0] = cpu_to_le32(insn);
if (!func) {
+ insns.literal = (unsigned long)&__static_call_ret;
insn = aarch64_insn_gen_branch_reg(AARCH64_INSN_REG_LR,
AARCH64_INSN_BRANCH_RETURN);
} else {
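With that in place, patching a site back to NULL leaves the trampoline
looking roughly like this (a sketch; the actual encodings come from the
aarch64_insn_gen_*() helpers above):

  0:    .quad   __static_call_ret       // literal now points at a bare ret
  tramp:
        bti     c
        ret                             // insn[1] patched to a return via LR
        ldr     x16, 0b                 // a stale reader that misses the ret
        br      x16                     //  still branches to __static_call_ret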