Message-ID: <YEirSryc3X3Lpf44@hirez.programming.kicks-ass.net>
Date: Wed, 10 Mar 2021 12:19:38 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: x86@...nel.org, Josh Poimboeuf <jpoimboe@...hat.com>
Cc: jgross@...e.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] objtool,x86: Fix uaccess PUSHF/POPF validation

Seems like I forgot LKML (again!)...

On Mon, Mar 08, 2021 at 03:46:04PM +0100, Peter Zijlstra wrote:
>
> Commit ab234a260b1f ("x86/pv: Rework arch_local_irq_restore() to not
> use popf") replaced "push %reg; popf" with something like: "test
> $0x200, %reg; jz 1f; sti; 1:", which breaks the pushf/popf symmetry
> that commit ea24213d8088 ("objtool: Add UACCESS validation") relies
> on.
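
For reference, the difference amounts to roughly the following (a
sketch of my own, not the exact kernel code; the function names are
made up for illustration):

	/* before ab234a260b1f: restore EFLAGS wholesale; the POPF pairs
	 * up with the PUSHF that originally saved the flags */
	static inline void irq_restore_old(unsigned long flags)
	{
		asm volatile("push %0; popf" : : "g" (flags) : "memory", "cc");
	}

	/* after: only execute STI when IF (bit 9, 0x200) was set in the
	 * saved flags; the compiler emits the test/jz/sti sequence quoted
	 * above, and there is no POPF left to pair with the earlier PUSHF */
	static inline void irq_restore_new(unsigned long flags)
	{
		if (flags & 0x200)
			asm volatile("sti" : : : "memory");
	}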
>
> The result is:
>
> drivers/gpu/drm/amd/amdgpu/si.o: warning: objtool: si_common_hw_init()+0xf36: PUSHF stack exhausted
>
> Meanwhile, commit c9c324dc22aa ("objtool: Support stack layout changes
> in alternatives") makes that we can actually use stack-ops in
> alternatives, which means we can revert 1ff865e343c2 ("x86,smap: Fix
> smap_{save,restore}() alternatives").
>
> That in turn means we can limit the PUSHF/POPF handling of
> ea24213d8088 to those instructions that are in alternatives.
>
> Fixes: ab234a260b1f ("x86/pv: Rework arch_local_irq_restore() to not use popf")
> Reported-by: Borislav Petkov <bp@...en8.de>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/x86/include/asm/smap.h |   10 ++++------
>  tools/objtool/check.c       |    3 +++
>  2 files changed, 7 insertions(+), 6 deletions(-)
>
> --- a/arch/x86/include/asm/smap.h
> +++ b/arch/x86/include/asm/smap.h
> @@ -58,9 +58,8 @@ static __always_inline unsigned long sma
>  	unsigned long flags;
>
>  	asm volatile ("# smap_save\n\t"
> -		      ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
> -		      "pushf; pop %0; " __ASM_CLAC "\n\t"
> -		      "1:"
> +		      ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC "\n\t",
> +				  X86_FEATURE_SMAP)
>  		      : "=rm" (flags) : : "memory", "cc");
>
>  	return flags;
> @@ -69,9 +68,8 @@ static __always_inline unsigned long sma
>  static __always_inline void smap_restore(unsigned long flags)
>  {
>  	asm volatile ("# smap_restore\n\t"
> -		      ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
> -		      "push %0; popf\n\t"
> -		      "1:"
> +		      ALTERNATIVE("", "push %0; popf\n\t",
> +				  X86_FEATURE_SMAP)
>  		      : : "g" (flags) : "memory", "cc");
>  }
>
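
As a side note on the smap.h hunk above: with the 1ff865e343c2 revert,
the default (no-SMAP) text of the ALTERNATIVE is empty and the
pushf/pop/CLAC (resp. push/popf) sequence is only patched in on parts
with X86_FEATURE_SMAP; and since c9c324dc22aa objtool can follow the
stack ops inside the alternative, the "jmp 1f" dance is no longer
needed. The calling pattern stays the same, something like this
(sketch only, example_uaccess_section() is made up):

	static void example_uaccess_section(void)
	{
		unsigned long flags;

		flags = smap_save();	/* save EFLAGS, then CLAC, on SMAP parts */
		/* ... work that must not run with user access enabled ... */
		smap_restore(flags);	/* put EFLAGS (and thus AC) back */
	}
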
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -2425,6 +2425,9 @@ static int handle_insn_ops(struct instru
>  		if (update_cfi_state(insn, next_insn, &state->cfi, op))
>  			return 1;
>
> +		if (!insn->alt_group)
> +			continue;
> +
>  		if (op->dest.type == OP_DEST_PUSHF) {
>  			if (!state->uaccess_stack) {
>  				state->uaccess_stack = 1;
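
For anyone not familiar with the PUSHF/POPF tracking that the new
alt_group check gates: objtool keeps the "user access enabled" state
across PUSHF/POPF pairs in a small fixed-depth stack of 1-bit flags,
and the warning above fires when that stack overflows. A toy model of
that scheme (my own illustration in plain C, not the objtool code):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	struct toy_state {
		uint32_t flag_stack;	/* bit 0 = most recently pushed flag */
		unsigned int depth;
		bool uaccess;		/* current "user access enabled" state */
	};

	/* PUSHF: remember the current uaccess flag; error out when the
	 * 32-bit word is full -- the analogue of "PUSHF stack exhausted" */
	int toy_pushf(struct toy_state *s)
	{
		if (s->depth == 32) {
			fprintf(stderr, "PUSHF stack exhausted\n");
			return 1;
		}
		s->flag_stack = (s->flag_stack << 1) | s->uaccess;
		s->depth++;
		return 0;
	}

	/* POPF: restore the uaccess flag saved by the matching PUSHF */
	void toy_popf(struct toy_state *s)
	{
		if (!s->depth)
			return;
		s->uaccess = s->flag_stack & 1;
		s->flag_stack >>= 1;
		s->depth--;
	}

With the hunk above, this bookkeeping only runs for stack ops inside an
alternative group, so the now-asymmetric open-coded flags sequences
elsewhere can no longer exhaust it.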