Message-ID: <20140519231012.GF6311@pd.tnic>
Date: Tue, 20 May 2014 01:10:12 +0200
From: Borislav Petkov <bp@...en8.de>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Matt Fleming <matt@...sole-pimps.org>,
Ingo Molnar <mingo@...nel.org>, linux-efi@...r.kernel.org,
linux-kernel@...r.kernel.org, "Luck, Tony" <tony.luck@...el.com>
Subject: Re: [GIT PULL] EFI changes for v3.16
On Mon, May 19, 2014 at 03:47:31PM -0700, H. Peter Anvin wrote:
> > efi_call can happen in an irq context (pstore) and there we really need
> > to make sure we're not scribbling over FPU state while we've interrupted
> > a thread or kernel mode with a live FPU state. Therefore, use the
> > kernel_fpu_begin/end() variants which do that check.
>
> How on earth does this solve anything? The only thing we add here is a
> WARN_ON_ONCE()... but the above text already tells us we have a problem.
>
> It seems, rather, that we need to figure out how to deal with a pstore
> in this case. There are a few possibilities:
>
> 1. We could keep an XSAVE buffer area around for this particular use.
> I am *assuming* we don't let more than one CPU into EFI, because I
> cannot for the life of me imagine that this is safe on typical CPUs.
>
> 2. Drop the pstore on the floor if !irq_fpu_usable().
>
> 3. Allow the pstore, then die (on the assumption that we're dead
> anyway).
>
> Comments?
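For concreteness, a minimal sketch of what option 2 above could look
like -- not a patch, the write-helper names are made up; only
irq_fpu_usable(), kernel_fpu_begin() and kernel_fpu_end() are the real
x86 interfaces (asm/i387.h in this kernel):

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/i387.h>

/* Hypothetical EFI variable write helper that ends up in efi_call(). */
extern int do_efi_variable_write(const void *buf, size_t size);

static int efi_pstore_write_guarded(const void *buf, size_t size)
{
	int ret;

	/* Option 2: drop the record instead of clobbering live FPU state. */
	if (!irq_fpu_usable())
		return -EBUSY;

	kernel_fpu_begin();	/* already does WARN_ON_ONCE(!irq_fpu_usable()) */
	ret = do_efi_variable_write(buf, size);
	kernel_fpu_end();

	return ret;
}

kernel_fpu_begin() only warns once when the context is wrong, so the
explicit irq_fpu_usable() check above is what actually implements the
"drop it on the floor" policy.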
The question is, why can't that pstore mumbo jumbo go and do its dance
in !irq context?
And how useful is the whole deal really, btw? I wanted to use it for
saving oopses, for example, but Tony said its write speed is
horribly low for that.
So why do we even bother with this thing and do the dance in irq context
for it? Is it worth it at all?
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Formatting is fine.