Message-Id: <FA0D2929-63D0-4473-A492-42227D7A5D98@amacapital.net>
Date:   Tue, 7 Jan 2020 10:41:52 -1000
From:   Andy Lutomirski <luto@...capital.net>
To:     linux-kernel@...r.kernel.org
Cc:     linux-tip-commits@...r.kernel.org,
        Yu-cheng Yu <yu-cheng.yu@...el.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Borislav Petkov <bp@...e.de>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
        Jann Horn <jannh@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        Rik van Riel <riel@...riel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Tony Luck <tony.luck@...el.com>, x86-ml <x86@...nel.org>
Subject: Re: [tip: x86/fpu] x86/fpu: Deactivate FPU state after failure during state load


> On Jan 7, 2020, at 2:52 AM, tip-bot2 for Sebastian Andrzej Siewior <tip-bot2@...utronix.de> wrote:
> 
> The following commit has been merged into the x86/fpu branch of tip:
> 
> Commit-ID:     bbc55341b9c67645d1a5471506370caf7dd4a203
> Gitweb:        https://git.kernel.org/tip/bbc55341b9c67645d1a5471506370caf7dd4a203
> Author:        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> AuthorDate:    Fri, 20 Dec 2019 20:59:06 +01:00
> Committer:     Borislav Petkov <bp@...e.de>
> CommitterDate: Tue, 07 Jan 2020 13:44:42 +01:00
> 
> x86/fpu: Deactivate FPU state after failure during state load
> 
> In __fpu__restore_sig(), fpu_fpregs_owner_ctx needs to be reset if the
> FPU state was not fully restored. Otherwise the following may happen (on
> the same CPU):
> 
>  Task A                     Task B               fpu_fpregs_owner_ctx
>  *active*                                        A.fpu
>  __fpu__restore_sig()
>                             ctx switch           load B.fpu
>                             *active*             B.fpu
>  fpregs_lock()
>  copy_user_to_fpregs_zeroing()
>    copy_kernel_to_xregs() *modify*
>    copy_user_to_xregs() *fails*
>  fpregs_unlock()
>                            ctx switch            skip loading B.fpu,
>                            *active*              B.fpu
> 
> In the success case, fpu_fpregs_owner_ctx is set to the current task.
> 
> In the failure case, the FPU state might have been modified by loading
> the init state.
> 
> In this case, fpu_fpregs_owner_ctx needs to be reset in order to ensure
> that the FPU state of the following task is loaded from saved state (and
> not skipped because it was the previous state).
> 
> Reset fpu_fpregs_owner_ctx after a failure during restore to ensure
> that the FPU state for the next task is always loaded.
> 
> The problem was debugged-by Yu-cheng Yu <yu-cheng.yu@...el.com>.

Wow, __fpu__restore_sig is a mess. We have __copy_from... that is Obviously Incorrect (tm) even though it’s not obviously exploitable. (It’s wrong because the *wrong pointer* is checked with access_ok().) We have a fast path that will execute just enough of the time to make debugging the slow path really annoying. (We should probably delete the fast path.)  There are pagefault_disable() calls in there that mostly serve to confuse people. (So we take a fault and sleep — big deal.  We have temporarily corrupt state, but no one will ever read it.  The retry after sleeping will clobber xstate, but lazy save is long gone and this should be fine now.  The real issue is that, if we’re preempted after a successful restore, then the new state will get lost.)

So either we should delete the fast path or we should make it work reliably and delete the slow path.  And we should get rid of the __copy. And we should have some test cases.

BTW, how was the bug in here discovered?  It looks like it only affects signal restore failure, which is usually not survivable unless the user program is really trying.

> 
> [ bp: Massage commit message. ]
> 
> Fixes: 5f409e20b7945 ("x86/fpu: Defer FPU state load until return to userspace")
> Reported-by: Yu-cheng Yu <yu-cheng.yu@...el.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> Signed-off-by: Borislav Petkov <bp@...e.de>
> Cc: Andy Lutomirski <luto@...nel.org>
> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: Fenghua Yu <fenghua.yu@...el.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Jann Horn <jannh@...gle.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: "Ravi V. Shankar" <ravi.v.shankar@...el.com>
> Cc: Rik van Riel <riel@...riel.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Tony Luck <tony.luck@...el.com>
> Cc: x86-ml <x86@...nel.org>
> Link: https://lkml.kernel.org/r/20191220195906.plk6kpmsrikvbcfn@linutronix.de
> ---
> arch/x86/kernel/fpu/signal.c | 3 +++
> 1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
> index 0071b79..400a05e 100644
> --- a/arch/x86/kernel/fpu/signal.c
> +++ b/arch/x86/kernel/fpu/signal.c
> @@ -352,6 +352,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
>            fpregs_unlock();
>            return 0;
>        }
> +        fpregs_deactivate(fpu);
>        fpregs_unlock();
>    }
> 
> @@ -403,6 +404,8 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
>    }
>    if (!ret)
>        fpregs_mark_activate();
> +    else
> +        fpregs_deactivate(fpu);
>    fpregs_unlock();
> 
> err_out:
