Message-Id: <DECA668C-B7EA-4663-8ABB-5B9E0495F498@amacapital.net>
Date: Thu, 12 Mar 2020 12:29:29 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Vince Weaver <vincent.weaver@...ne.edu>,
Dave Jones <dsj@...com>, Jann Horn <jannh@...gle.com>,
Miroslav Benes <mbenes@...e.cz>,
Andy Lutomirski <luto@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 03/14] x86/entry/64: Fix unwind hints in register clearing code
> On Mar 12, 2020, at 10:31 AM, Josh Poimboeuf <jpoimboe@...hat.com> wrote:
>
> The PUSH_AND_CLEAR_REGS macro zeroes each register immediately after
> pushing it. If an NMI or exception hits after a register is cleared,
> but before the UNWIND_HINT_REGS annotation, the ORC unwinder will
> wrongly think the previous value of the register was zero. This can
> confuse the unwinding process and cause it to exit early.
>
> Because ORC is simpler than DWARF, there are a limited number of unwind
> annotation states, so it's not possible to add an individual unwind hint
> after each push/clear combination. Instead, the register clearing
> instructions need to be consolidated and moved to after the
> UNWIND_HINT_REGS annotation.
I don’t suppose you know how bad the performance hit is on a non-PTI machine?
>
> Fixes: 3f01daecd545 ("x86/entry/64: Introduce the PUSH_AND_CLEAN_REGS macro")
> Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
> ---
> arch/x86/entry/calling.h | 40 +++++++++++++++++++++-------------------
> 1 file changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
> index 0789e13ece90..1c7f13bb6728 100644
> --- a/arch/x86/entry/calling.h
> +++ b/arch/x86/entry/calling.h
> @@ -98,13 +98,6 @@ For 32-bit we have the following conventions - kernel is built with
> #define SIZEOF_PTREGS 21*8
>
> .macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0
> - /*
> - * Push registers and sanitize registers of values that a
> - * speculation attack might otherwise want to exploit. The
> - * lower registers are likely clobbered well before they
> - * could be put to use in a speculative execution gadget.
> - * Interleave XOR with PUSH for better uop scheduling:
> - */
> .if \save_ret
> pushq %rsi /* pt_regs->si */
> movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */
> @@ -114,34 +107,43 @@ For 32-bit we have the following conventions - kernel is built with
> pushq %rsi /* pt_regs->si */
> .endif
> pushq \rdx /* pt_regs->dx */
> - xorl %edx, %edx /* nospec dx */
> pushq %rcx /* pt_regs->cx */
> - xorl %ecx, %ecx /* nospec cx */
> pushq \rax /* pt_regs->ax */
> pushq %r8 /* pt_regs->r8 */
> - xorl %r8d, %r8d /* nospec r8 */
> pushq %r9 /* pt_regs->r9 */
> - xorl %r9d, %r9d /* nospec r9 */
> pushq %r10 /* pt_regs->r10 */
> - xorl %r10d, %r10d /* nospec r10 */
> pushq %r11 /* pt_regs->r11 */
> - xorl %r11d, %r11d /* nospec r11*/
> pushq %rbx /* pt_regs->rbx */
> - xorl %ebx, %ebx /* nospec rbx*/
> pushq %rbp /* pt_regs->rbp */
> - xorl %ebp, %ebp /* nospec rbp*/
> pushq %r12 /* pt_regs->r12 */
> - xorl %r12d, %r12d /* nospec r12*/
> pushq %r13 /* pt_regs->r13 */
> - xorl %r13d, %r13d /* nospec r13*/
> pushq %r14 /* pt_regs->r14 */
> - xorl %r14d, %r14d /* nospec r14*/
> pushq %r15 /* pt_regs->r15 */
> - xorl %r15d, %r15d /* nospec r15*/
> UNWIND_HINT_REGS
> +
> .if \save_ret
> pushq %rsi /* return address on top of stack */
> .endif
> +
> + /*
> + * Sanitize registers of values that a speculation attack might
> + * otherwise want to exploit. The lower registers are likely clobbered
> + * well before they could be put to use in a speculative execution
> + * gadget.
> + */
> + xorl %edx, %edx /* nospec dx */
> + xorl %ecx, %ecx /* nospec cx */
> + xorl %r8d, %r8d /* nospec r8 */
> + xorl %r9d, %r9d /* nospec r9 */
> + xorl %r10d, %r10d /* nospec r10 */
> + xorl %r11d, %r11d /* nospec r11 */
> + xorl %ebx, %ebx /* nospec rbx */
> + xorl %ebp, %ebp /* nospec rbp */
> + xorl %r12d, %r12d /* nospec r12 */
> + xorl %r13d, %r13d /* nospec r13 */
> + xorl %r14d, %r14d /* nospec r14 */
> + xorl %r15d, %r15d /* nospec r15 */
> +
> .endm
>
> .macro POP_REGS pop_rdi=1 skip_r11rcx=0
> --
> 2.21.1
>