Date:   Mon, 17 Oct 2016 20:06:06 -0400
From:   Rik van Riel <riel@...hat.com>
To:     Andy Lutomirski <luto@...capital.net>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...en8.de>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Lutomirski <luto@...nel.org>,
        dave.hansen@...el.linux.com, Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH RFC 3/3] x86/fpu: defer FPU state load until return to
 userspace

On Mon, 2016-10-17 at 13:58 -0700, Andy Lutomirski wrote:
> On Mon, Oct 17, 2016 at 1:09 PM,  <riel@...hat.com> wrote:
> > 
> > From: Rik van Riel <riel@...hat.com>
> > 
> > Defer loading of FPU state until return to userspace. This gives
> > the kernel the potential to skip loading FPU state for tasks that
> > stay in kernel mode, or for tasks that end up with repeated
> > invocations of kernel_fpu_begin.

> >  #define CREATE_TRACE_POINTS
> >  #include <trace/events/syscalls.h>
> > @@ -189,6 +190,14 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
> >         if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
> >                 exit_to_usermode_loop(regs, cached_flags);
> > 
> > +       /* Reload ti->flags; we may have rescheduled above. */
> > +       cached_flags = READ_ONCE(ti->flags);
> 
> Stick this bit in the "if" above, please.

Will do.
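
Something along these lines, perhaps (untested), so ti->flags is
only re-read when we actually went through the loop:

	if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS)) {
		exit_to_usermode_loop(regs, cached_flags);
		/* Reload ti->flags; we may have rescheduled above. */
		cached_flags = READ_ONCE(ti->flags);
	}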

> But I still don't see how this can work correctly with PKRU.

OK, Andy and I talked on IRC, and we have some ideas on how
to fix & improve this series:

1) pin/unpin_fpregs_active to prevent leaking of other
   users' fpregs contents to userspace (patch 1)
2) eagerly switch PKRU state (only), at task switch time,
   if the incoming task has different protection keys from
   the outgoing task (somewhat unlikely), just like the
   KVM vcpu entry & exit code is already doing (rough
   sketch below)
3) remove stts from the KVM VMX code (Andy may get
   to this before me)
4) enhance __kernel_fpu_begin() to take an fpu argument,
   and let the caller (really just kvm_load_guest_fpu)
   know whether that fpu state is still present in the
   registers, allowing it to skip __copy_kernel_to_fpregs
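
For (2), something along these lines is what I have in mind.  Untested,
the helper name and placement are illustrative only, and it glosses over
how PKRU interacts with the rest of the xstate; it just shows the
"compare and write only if different" part:

	/*
	 * Sketch only: called from the context switch path, write the
	 * incoming task's PKRU value into the register iff it differs
	 * from what is currently loaded.
	 */
	static void switch_pkru(struct fpu *next_fpu)
	{
		struct pkru_state *pk;
		u32 next_pkru = 0;

		if (!boot_cpu_has(X86_FEATURE_OSPKE))
			return;

		pk = get_xsave_addr(&next_fpu->state.xsave,
				    XFEATURE_MASK_PKRU);
		if (pk)
			next_pkru = pk->pkru;

		/* Most tasks run with the same (default) pkey value. */
		if (read_pkru() != next_pkru)
			write_pkru(next_pkru);
	}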

-- 
All Rights Reversed.
