Message-ID: <CAK1hOcMa78F+pGA2Egp+L_wzn7QkTt-AfgyoYz740=zvM4w3WQ@mail.gmail.com>
Date: Tue, 24 Feb 2015 20:15:58 +0100
From: Denys Vlasenko <vda.linux@...glemail.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: Oleg Nesterov <oleg@...hat.com>, Rik van Riel <riel@...hat.com>,
X86 ML <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Borislav Petkov <bp@...en8.de>
Subject: Re: [RFC PATCH] x86, fpu: Use eagerfpu by default on all CPUs
On Fri, Feb 20, 2015 at 7:58 PM, Andy Lutomirski <luto@...capital.net> wrote:
> We have eager and lazy fpu modes, introduced in:
>
> 304bceda6a18 x86, fpu: use non-lazy fpu restore for processors supporting xsave
>
> The result is rather messy. There are two code paths in almost all of the
> FPU code, and only one of them (the eager case) is tested frequently, since
> most kernel developers have new enough hardware that we use eagerfpu.
>
> It seems that, on any remotely recent hardware, eagerfpu is a win:
> glibc uses SSE2, so laziness is probably overoptimistic, and, in any
> case, manipulating TS is far slower than saving and restoring the full
> state.
>
> To try to shake out any latent issues on old hardware, this changes
> the default to eager on all CPUs. If no performance or functionality
> problems show up, a subsequent patch could remove lazy mode entirely.
I'm a big fan of simplifying things, but...
SIMD register state has been growing on x86, and it is going to grow again,
this time four-fold with Intel MIC:
from sixteen 256-bit registers to thirty-two 512-bit registers.
That's 2 kbytes of data. Just moving that much data to/from memory
on every context switch will take some time.
And some people are already talking about 1024-bit registers...
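
For scale, a rough back-of-the-envelope sketch (plain userspace C, just
multiplying the architectural register counts quoted above; illustrative
only -- it ignores the xsave header and alignment, so the real save area
is somewhat larger):

#include <stdio.h>

/* Size of the SIMD register file alone, per task.
 * Illustrative numbers only, not the kernel's xstate layout. */
int main(void)
{
	static const struct { const char *isa; int nregs, bits; } v[] = {
		{ "SSE (xmm0-15)",     16, 128 },
		{ "AVX (ymm0-15)",     16, 256 },
		{ "AVX-512 (zmm0-31)", 32, 512 },
	};

	for (unsigned int i = 0; i < sizeof(v) / sizeof(v[0]); i++)
		printf("%-20s %2d x %3d bits = %4d bytes\n",
		       v[i].isa, v[i].nregs, v[i].bits,
		       v[i].nregs * v[i].bits / 8);
	return 0;
}

The last line is where the 2 kbytes above comes from; a hypothetical
1024-bit extension would double it again to 4 kbytes per task.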
Let's not completely remove the lazy FPU saving code just yet.
Maybe we'll be forced to reinstate it.