Message-ID: <YhadiVbwao/p2N7o@lt-gp.iram.es>
Date: Wed, 23 Feb 2022 21:48:09 +0100
From: Gabriel Paubert <paubert@...m.es>
To: Christophe Leroy <christophe.leroy@...roup.eu>
Cc: Kees Cook <keescook@...omium.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] powerpc/32: Clear volatile regs on syscall exit
On Wed, Feb 23, 2022 at 06:11:36PM +0100, Christophe Leroy wrote:
> Commit a82adfd5c7cb ("hardening: Introduce CONFIG_ZERO_CALL_USED_REGS")
> added zeroing of used registers at function exit.
>
> For the time being, PPC64 clears volatile registers on syscall exit but
> PPC32 doesn't, for performance reasons.
>
> Add that clearing in PPC32 syscall exit as well, but only when
> CONFIG_ZERO_CALL_USED_REGS is selected.
>
> On an 8xx, the null_syscall selftest gives:
> - Without CONFIG_ZERO_CALL_USED_REGS : 288 cycles
> - With CONFIG_ZERO_CALL_USED_REGS : 305 cycles
> - With CONFIG_ZERO_CALL_USED_REGS + this patch : 319 cycles
>
> Note that (independent of this patch), with pmac32_defconfig,
> vmlinux size is as follows with/without CONFIG_ZERO_CALL_USED_REGS:
>
> text data bss dec hex filename
> 9578869 2525210 194400 12298479 bba8ef vmlinux.without
> 10318045 2525210 194400 13037655 c6f057 vmlinux.with
>
> That is a 7.7% increase in text size and 6.0% in overall size.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@...roup.eu>
> ---
> arch/powerpc/kernel/entry_32.S | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
> index 7748c278d13c..199f23092c02 100644
> --- a/arch/powerpc/kernel/entry_32.S
> +++ b/arch/powerpc/kernel/entry_32.S
> @@ -151,6 +151,21 @@ syscall_exit_finish:
> bne 3f
> mtcr r5
>
> +#ifdef CONFIG_ZERO_CALL_USED_REGS
> + /* Zero volatile regs that may contain sensitive kernel data */
> + li r0,0
> + li r4,0
> + li r5,0
> + li r6,0
> + li r7,0
> + li r8,0
> + li r9,0
> + li r10,0
> + li r11,0
> + li r12,0
> + mtctr r0
> + mtxer r0
Here, I'm almost sure that on some processors it would be better to
separate mtctr from mtxer. mtxer is typically very expensive (pipeline
flush), but I don't know what the best ordering is for the average core.

And what about lr? Should it also be cleared?
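
Just to illustrate the idea (an untested sketch, and the best schedule
is surely core-dependent), one could issue the expensive SPR moves
early and spread the cheap li instructions around them, e.g.:

	li	r0,0
	mtxer	r0	/* expensive on some cores, so start it first */
	li	r4,0
	li	r5,0
	li	r6,0
	mtctr	r0
	li	r7,0
	li	r8,0
	li	r9,0
	li	r10,0
	li	r11,0
	li	r12,0
	mtlr	r0	/* only if lr is treated as volatile here */

Whether the mtlr is wanted at all depends on the answer to the
question above: if the exit path restores lr from pt_regs anyway, it
is unnecessary.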
Gabriel
> +#endif
> 1: lwz r2,GPR2(r1)
> lwz r1,GPR1(r1)
> rfi
> --
> 2.34.1
>