Date:   Thu, 18 Jun 2020 17:13:45 +0200
From:   Marco Elver <elver@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Josh Poimboeuf <jpoimboe@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Andrey Konovalov <andreyknvl@...gle.com>,
        Mark Rutland <mark.rutland@....com>, mhelsley@...are.com,
        Steven Rostedt <rostedt@...dmis.org>, jthierry@...hat.com,
        mbenes@...e.cz
Subject: Re: [PATCH 3/7] x86/entry: Fixup bad_iret vs noinstr

On Thu, 18 Jun 2020 at 16:50, Peter Zijlstra <peterz@...radead.org> wrote:
>
> vmlinux.o: warning: objtool: fixup_bad_iret()+0x8e: call to memcpy() leaves .noinstr.text section
>
> Worse, with KASAN there is no telling what memcpy() actually is. Force
> the use of __memcpy(), which is our assembly implementation.
>
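
For readers following along: under KASAN the memcpy() symbol can resolve
to an instrumented implementation, so a noinstr function has no
guarantee about what it ends up calling; __memcpy() always names the
uninstrumented assembly routine. A minimal sketch of the constraint,
where copy_frame() is a hypothetical name rather than a kernel function:

    #include <linux/compiler.h>  /* noinstr */
    #include <linux/string.h>    /* __memcpy() on x86-64 */

    /* noinstr emits this function into .noinstr.text; objtool then
     * checks that nothing it calls leaves that section. */
    noinstr void copy_frame(void *dst, const void *src, size_t n)
    {
            /* memcpy(dst, src, n);   may be instrumented under KASAN */
            __memcpy(dst, src, n);   /* always the plain assembly version */
    }
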
> Reported-by: Marco Elver <elver@...gle.com>
> Suggested-by: Marco Elver <elver@...gle.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>

KASAN no longer crashes; the stack size increase also appears to be
sufficient for the particular case I ran into.

Tested-by: Marco Elver <elver@...gle.com>

Thanks!

> ---
>  arch/x86/kernel/traps.c  |    6 +++---
>  arch/x86/lib/memcpy_64.S |    4 ++++
>  2 files changed, 7 insertions(+), 3 deletions(-)
>
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -685,13 +685,13 @@ struct bad_iret_stack *fixup_bad_iret(st
>                 (struct bad_iret_stack *)__this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;
>
>         /* Copy the IRET target to the temporary storage. */
> -       memcpy(&tmp.regs.ip, (void *)s->regs.sp, 5*8);
> +       __memcpy(&tmp.regs.ip, (void *)s->regs.sp, 5*8);
>
>         /* Copy the remainder of the stack from the current stack. */
> -       memcpy(&tmp, s, offsetof(struct bad_iret_stack, regs.ip));
> +       __memcpy(&tmp, s, offsetof(struct bad_iret_stack, regs.ip));
>
>         /* Update the entry stack */
> -       memcpy(new_stack, &tmp, sizeof(tmp));
> +       __memcpy(new_stack, &tmp, sizeof(tmp));
>
>         BUG_ON(!user_mode(&new_stack->regs));
>         return new_stack;
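
For readers skimming the hunk above: the 5*8 is the size of the
five-quadword hardware IRET frame that the CPU pushed, i.e. the tail of
struct pt_regs. An illustrative layout (this struct is a sketch, not a
kernel type):

    struct iret_frame {
            unsigned long ip;     /* RIP    */
            unsigned long cs;
            unsigned long flags;  /* RFLAGS */
            unsigned long sp;     /* RSP    */
            unsigned long ss;
    };      /* sizeof == 5*8 == 40 bytes on x86-64 */
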
> --- a/arch/x86/lib/memcpy_64.S
> +++ b/arch/x86/lib/memcpy_64.S
> @@ -8,6 +8,8 @@
>  #include <asm/alternative-asm.h>
>  #include <asm/export.h>
>
> +.pushsection .noinstr.text, "ax"
> +
>  /*
>   * We build a jump to memcpy_orig by default which gets NOPped out on
>   * the majority of x86 CPUs which set REP_GOOD. In addition, CPUs which
> @@ -184,6 +186,8 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
>         retq
>  SYM_FUNC_END(memcpy_orig)
>
> +.popsection
> +
>  #ifndef CONFIG_UML
>
>  MCSAFE_TEST_CTL
>
>
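
For context on the memcpy_64.S hunk: the .pushsection/.popsection pair
places everything in between (memcpy, __memcpy and memcpy_orig) into
.noinstr.text, so the __memcpy() calls from fixup_bad_iret() stay
within the section objtool is checking. A rough C analogue of that
placement, assuming plain GCC section attributes (stays_noinstr is a
made-up name):

    /* Emit the function into .noinstr.text instead of .text; this is
     * essentially what the kernel's noinstr annotation arranges. */
    __attribute__((__section__(".noinstr.text")))
    void stays_noinstr(void)
    {
    }
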
