lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20211026103655.GB30152@C02TD0UTHF1T.local>
Date:   Tue, 26 Oct 2021 11:36:55 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Ard Biesheuvel <ardb@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        James Morse <james.morse@....com>,
        David Laight <David.Laight@...lab.com>,
        Quentin Perret <qperret@...gle.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>
Subject: Re: [PATCH 2/4] arm64: implement support for static call trampolines

On Mon, Oct 25, 2021 at 05:10:24PM +0200, Ard Biesheuvel wrote:
> On Mon, 25 Oct 2021 at 17:05, Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Mon, Oct 25, 2021 at 04:55:17PM +0200, Ard Biesheuvel wrote:
> > > On Mon, 25 Oct 2021 at 16:47, Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > > > Perhaps a little something like so.. Shaves 2 instructions off each
> > > > trampoline.
> > > >
> > > > --- a/arch/arm64/include/asm/static_call.h
> > > > +++ b/arch/arm64/include/asm/static_call.h
> > > > @@ -11,9 +11,7 @@
> > > >             "   hint    34      /* BTI C */                             \n" \
> > > >                 insn "                                                  \n" \
> > > >             "   ldr     x16, 0b                                         \n" \
> > > > -           "   cbz     x16, 1f                                         \n" \
> > > >             "   br      x16                                             \n" \
> > > > -           "1: ret                                                     \n" \
> > > >             "   .popsection                                             \n")
> > > >
> > > >  #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func)                      \
> > > > --- a/arch/arm64/kernel/patching.c
> > > > +++ b/arch/arm64/kernel/patching.c
> > > > @@ -90,6 +90,11 @@ int __kprobes aarch64_insn_write(void *a
> > > >         return __aarch64_insn_write(addr, &i, AARCH64_INSN_SIZE);
> > > >  }
> > > >
> > > > +asm("__static_call_ret:                \n"
> > > > +    "  ret                     \n")
> > > > +
> > >
> > > This breaks BTI as it lacks the landing pad, and it will be called indirectly.
> >
> > Argh!
> >
> > > > +extern void __static_call_ret(void);
> > > > +
> > >
> > > Better to have an ordinary C function here (with consistent linkage),
> > > but we need to take the address in a way that works with Clang CFI.
> >
> > There is that.
> >
> > > As the two additional instructions are on an ice cold path anyway, I'm
> > > not sure this is an obvious improvement tbh.
> >
> > For me it's both simpler -- by virtue of being more consistent, and
> > smaller. So double win :-)
> >
> > That is; you're already relying on the literal being unconditionally
> > updated for the normal B foo -> NOP path, and having the RET -> NOP path
> > be handled differently is just confusing.
> >
> > At least, that's how I'm seeing it today...
> 
> Fair enough. I don't have a strong opinion either way, so I'll let
> some other arm64 folks chime in as well.

My preference overall is to keep the trampoline self-contained, and I'd
prefer to keep the RET inline in the trampoline rather than trying to
factor it out so that all the control-flow is clearly in one place.

So I'd prefer that we have the sequence as-is:

| 0:	.quad 0x0
| 	bti	c
| 	< insn >
| 	ldr	x16, 0b
| 	cbz	x16, 1f
| 	br	x16
| 1:	ret

If we knew these were only called with IRQs enabled (and so we can take
an IPI to generate a context synchronization event), we could patch
<insn> to a RET and point the literal back at the BTI, e.g.

| 0:	.quad 0x0
| 	bti	c
| 	< insn >
| 	ldr	x16, 0b
| 	br	x16

... but I'm pretty sure there are CPUs that will never re-fetch <insn>
in that case, and will get stuck in an infinite loop.

Thanks,
Mark.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ