Message-ID: <538FB775.8070405@amacapital.net>
Date: Wed, 04 Jun 2014 17:19:01 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Borislav Petkov <bp@...e.de>, "H. Peter Anvin" <hpa@...or.com>
CC: linux-kernel@...r.kernel.org, mingo@...nel.org,
ricardo.neri-calderon@...ux.intel.com, tglx@...utronix.de,
matt.fleming@...el.com, linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/efi] x86/efi: Check for unsafe dealing with FPU state
in irq ctxt
On 06/04/2014 03:49 PM, Borislav Petkov wrote:
> On Wed, Jun 04, 2014 at 03:17:30PM -0700, H. Peter Anvin wrote:
>> I seem to have lost track of this... does this actually solve
>> anything, or does it just mean we'll explode hard?
>
> Not that hard - it'll warn once only.
>
> AFAIR, the discussion stalled but we were going in the direction of not
> calling into efi from pstore in irq context.
The kernel_fpu_begin thing has annoyed me in the past. How bad would it
be to allocate some percpu space and just do a full save/restore when
kernel_fpu_begin happens in a context where it currently doesn't work?
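
Roughly something like this (untested sketch; save_xstate_to() and
restore_xstate_from() are placeholders for whatever xsave/xrstor wrappers
we'd actually use, FPU_SAVE_SIZE is a guess, and the 16-slot limit is the
nesting bound I argue for below):

/*
 * Untested sketch: a per-CPU stack of save slots indexed by a nesting
 * counter, so kernel_fpu_begin() in irq context can spill the live
 * state into the next free slot instead of refusing to run.
 */
#define MAX_FPU_NEST	16
#define FPU_SAVE_SIZE	2048	/* guess; real xstate size is CPU-dependent */

struct fpu_nest_area {
	int depth;
	u8 buf[MAX_FPU_NEST][FPU_SAVE_SIZE] __aligned(64);
};

static DEFINE_PER_CPU(struct fpu_nest_area, fpu_nest);

static void *irq_fpu_save(void)
{
	struct fpu_nest_area *a = this_cpu_ptr(&fpu_nest);
	void *slot;

	if (WARN_ON_ONCE(a->depth >= MAX_FPU_NEST))
		return NULL;

	slot = a->buf[a->depth++];
	save_xstate_to(slot);		/* hypothetical xsave wrapper */
	return slot;
}

static void irq_fpu_restore(void *slot)
{
	struct fpu_nest_area *a = this_cpu_ptr(&fpu_nest);

	restore_xstate_from(slot);	/* hypothetical xrstor wrapper */
	a->depth--;
}
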
I don't know how large the state is these days, but there must be some
limit to how deeply interrupts and exceptions can nest. For each IST
entry, there is a hard limit to how deeply they can nest (once for all
but debug and four times for debug IIRC), plus one NMI (nested ones
don't touch the FPU). The number of non-IST entries we can nest must be
bounded, too.
Let's say there are at most 16 levels of nesting. 16 * state size *
cpus isn't that much.
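For scale, if the xsave area is on the order of 1 KB with AVX (the exact
size is CPU- and feature-dependent):

	16 slots * ~1 KB ≈ 16 KB per CPU, i.e. roughly 4 MB on a 256-CPU box.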
Of course, code in interrupts that nests kernel_fpu_begin itself could
have a problem. But this can be solved with a little bit of trickery in
the entry code or something.
If we did this, then I think the x86 crypto code could get rid of all of
its ridiculous async code.
--Andy