Message-ID: <CALCETrV01t-4gya0WEY0=R7XvuDA4dkf_pssfPZKDm9=1fCBmg@mail.gmail.com>
Date:   Thu, 4 Oct 2018 09:14:33 -0700
From:   Andy Lutomirski <luto@...nel.org>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
        Andrew Lutomirski <luto@...nel.org>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krcmar <rkrcmar@...hat.com>,
        kvm list <kvm@...r.kernel.org>,
        "Jason A. Donenfeld" <Jason@...c4.com>,
        Rik van Riel <riel@...riel.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH 11/11] x86/fpu: defer FPU state load until return to userspace

On Thu, Oct 4, 2018 at 7:06 AM Sebastian Andrzej Siewior
<bigeasy@...utronix.de> wrote:
>
> From: Rik van Riel <riel@...riel.com>
>
> Defer loading of FPU state until return to userspace. This gives
> the kernel the potential to skip loading FPU state for tasks that
> stay in kernel mode, or for tasks that end up with repeated
> invocations of kernel_fpu_begin().
>
> It also increases the chances that a task's FPU state will remain
> valid in the FPU registers until it is scheduled back in, allowing
> us to skip restoring that task's FPU state altogether.
>
> The __fpregs_changes_{begin|end}() section ensures that the registers
> remain unchanged. Otherwise a context switch or a BH could save the
> registers to its FPU context, leaving the processor's FPU registers
> with random content.
> fpu__restore() has only one caller, so I pulled the preempt_disable()
> part into fpu__restore(). While the function previously *loaded* the
> registers, it now just makes sure that they are loaded on return to
> userland.
>
> KVM swaps the host/guest registers on the entry/exit path. I kept the
> flow as is: first it ensures that the registers are loaded, then it
> saves the current (host) state before loading the guest's registers.
> Before entering the guest, it ensures that the registers are still
> loaded.
>
> Signed-off-by: Rik van Riel <riel@...riel.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  arch/x86/entry/common.c             |   9 +++
>  arch/x86/include/asm/fpu/api.h      |  11 +++
>  arch/x86/include/asm/fpu/internal.h |  25 ++++---
>  arch/x86/include/asm/trace/fpu.h    |   5 +-
>  arch/x86/kernel/fpu/core.c          | 108 ++++++++++++++++++++--------
>  arch/x86/kernel/fpu/signal.c        |   3 -
>  arch/x86/kernel/process.c           |   2 +-
>  arch/x86/kernel/process_32.c        |   7 +-
>  arch/x86/kernel/process_64.c        |   7 +-
>  arch/x86/kvm/x86.c                  |  18 +++--
>  10 files changed, 143 insertions(+), 52 deletions(-)
>
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index 3b2490b819181..3dad5c3b335eb 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -31,6 +31,7 @@
>  #include <asm/vdso.h>
>  #include <linux/uaccess.h>
>  #include <asm/cpufeature.h>
> +#include <asm/fpu/api.h>
>
>  #define CREATE_TRACE_POINTS
>  #include <trace/events/syscalls.h>
> @@ -196,6 +197,14 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
>         if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
>                 exit_to_usermode_loop(regs, cached_flags);
>
> +       /* Reload ti->flags; we may have rescheduled above. */
> +       cached_flags = READ_ONCE(ti->flags);
> +
> +       if (unlikely(cached_flags & _TIF_LOAD_FPU))
> +               switch_fpu_return();
> +       else
> +               fpregs_is_state_consistent();

Shouldn't this be:

fpregs_assert_state_consistent();  /* see below */

if (unlikely(cached_flags & _TIF_LOAD_FPU))
  switch_fpu_return();

> diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
> index a9caac9d4a729..e3077860f7333 100644
> --- a/arch/x86/include/asm/fpu/api.h
> +++ b/arch/x86/include/asm/fpu/api.h
> @@ -27,6 +27,17 @@ extern void kernel_fpu_begin(void);
>  extern void kernel_fpu_end(void);
>  extern bool irq_fpu_usable(void);
>
> +#ifdef CONFIG_X86_DEBUG_FPU
> +extern void fpregs_is_state_consistent(void);
> +#else
> +static inline void fpregs_is_state_consistent(void) { }
> +#endif

Can you name this something like fpregs_assert_state_consistent()?
The "is" name makes it sound like it's:

bool fpregs_is_state_consistent();

and you're supposed to do:

WARN_ON(!fpregs_is_state_consistent());
