Message-ID: <20090421224951.GD24073@hera.kernel.org>
Date: Tue, 21 Apr 2009 22:49:51 +0000
From: Chris Wright <chrisw@...s-sol.org>
To: Suresh Siddha <suresh.b.siddha@...el.com>
Cc: hpa@...ux.intel.com, mingo@...e.hu, tglx@...utronix.de,
linux-kernel@...r.kernel.org, stable@...nel.org
Subject: Re: [stable] [patch] x64: fix FPU corruption with signals and
preemption
* Suresh Siddha (suresh.b.siddha@...el.com) wrote:
> From: Suresh Siddha <suresh.b.siddha@...el.com>
> Subject: x64: fix FPU corruption with signals and preemption
>
> Impact: fix FPU state corruption
>
> In the 64-bit signal delivery path, clear_used_math() was happening before
> saving the current active FPU state onto the user stack for signal handling.
> Between clear_used_math() and the state store onto the user stack, we can
> potentially get a page fault for the user address and block. In fact, while
> testing we were hitting the might_fault() in __clear_user(), which can do a
> schedule().
>
> At a later point in time, we will schedule back into this process and
> resume the state save (using the "xsave/fxsave" instruction), which can lead
> to a DNA fault. And since used_math was cleared earlier, we will reinit the
> FP state in the DNA fault handler and continue. This reinit results in
> losing the FPU state of the process.
>
> Move clear_used_math() to a point after the FPU state has been stored
> onto the user stack.
>
> This issue has been present for a long time (even before the xsave changes
> and the x86 merge), but it is easily exposed in the 2.6.28.x and 2.6.29.x
> series because of the __clear_user() in this path, which has an explicit
> __cond_resched() leading to a context switch with CONFIG_PREEMPT_VOLUNTARY.
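>
> A minimal sketch of the ordering change (illustrative only; the helper
> names approximate the kernel's and this is not the actual diff):
>
> 	/* Buggy ordering (before this patch): */
> 	clear_used_math();			/* FP state marked uninitialized */
> 	err = __copy_to_user(buf, fpstate, size); /* may fault and schedule() */
> 	/*
> 	 * If we schedule here, the eventual DNA fault sees used_math
> 	 * already clear and reinitializes the FPU, losing the state.
> 	 */
>
> 	/* Fixed ordering: */
> 	err = __copy_to_user(buf, fpstate, size); /* save the state first */
> 	if (!err)
> 		clear_used_math();	/* clear only after a successful save */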
>
> Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
> Cc: stable@...nel.org [2.6.28.x, 2.6.29.x]
Did this one get lost?
thanks,
-chris