Message-ID: <20191018172309.GB18838@lakrids.cambridge.arm.com>
Date: Fri, 18 Oct 2019 18:23:09 +0100
From: Mark Rutland <mark.rutland@....com>
To: Jann Horn <jannh@...gle.com>
Cc: Sami Tolvanen <samitolvanen@...gle.com>,
Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Dave Martin <Dave.Martin@....com>,
Kees Cook <keescook@...omium.org>,
Laura Abbott <labbott@...hat.com>,
Nick Desaulniers <ndesaulniers@...gle.com>,
clang-built-linux <clang-built-linux@...glegroups.com>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
linux-arm-kernel@...ts.infradead.org,
kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 18/18] arm64: implement Shadow Call Stack
On Fri, Oct 18, 2019 at 07:12:52PM +0200, Jann Horn wrote:
> On Fri, Oct 18, 2019 at 6:16 PM Sami Tolvanen <samitolvanen@...gle.com> wrote:
> > This change implements shadow stack switching, initial SCS set-up,
> > and interrupt shadow stacks for arm64.
> [...]
> > +static inline void scs_save(struct task_struct *tsk)
> > +{
> > +	void *s;
> > +
> > +	asm volatile("mov %0, x18" : "=r" (s));
> > +	task_set_scs(tsk, s);
> > +}
> > +
> > +static inline void scs_load(struct task_struct *tsk)
> > +{
> > +	asm volatile("mov x18, %0" : : "r" (task_scs(tsk)));
> > +	task_set_scs(tsk, NULL);
> > +}
>
> These things should probably be __always_inline or something like
> that? If the compiler decides not to inline them (e.g. when called
> from scs_thread_switch()), stuff will blow up, right?
I think scs_save() would be better off living in assembly in
cpu_switch_to(), where we switch the stack and current. It shouldn't
matter whether scs_load() is inlined or not, since the x18 value
_should_ be invariant from the PoV of the task.
We just need to add a TSK_TI_SCS to asm-offsets.c, and then insert a
single LDR at the end:
	mov	sp, x9
	msr	sp_el0, x1
#ifdef CONFIG_SHADOW_CALL_STACK
	ldr	x18, [x1, TSK_TI_SCS]
#endif
	ret
Thanks,
Mark.