Message-ID: <CALCETrWoDEXzH=y9sqoGWtzLPoeWgpRyUQw5AiCubX4O+UOa3w@mail.gmail.com>
Date: Fri, 15 Jun 2018 11:53:39 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Andrew Lutomirski <luto@...nel.org>,
"Jason A. Donenfeld" <Jason@...c4.com>,
Rik van Riel <riel@...riel.com>,
LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>
Subject: Re: Lazy FPU restoration / moving kernel_fpu_end() to context switch
On Fri, Jun 15, 2018 at 11:50 AM Dave Hansen
<dave.hansen@...ux.intel.com> wrote:
>
> On 06/15/2018 11:31 AM, Andy Lutomirski wrote:
> >     for (thing) {
> >             kernel_fpu_begin();
> >             encrypt(thing);
> >             kernel_fpu_end();
> >     }
>
> Don't forget that the processor has optimizations for this, too. The
> "modified optimization" will notice that between:
>
>       kernel_fpu_end();    -> XRSTOR
> and
>       kernel_fpu_begin();  -> XSAVE(S|OPT)
>
> the processor has not modified the states. It'll skip doing any writes
> of the state. Doing what Andy is describing is still way better than
> letting the processor do it, but you should just know up front that this
> may not be as much of a win as you would expect.
Even with the modified optimization, kernel_fpu_end() still needs to
reload the state that was trashed by the kernel FPU use. If the
kernel is using something like AVX512 state, then kernel_fpu_end()
will transfer an enormous amount of data no matter how clever the CPU
is. And I think I once measured XSAVEOPT taking a hundred cycles or
so even when RFBM==0, so it's not exactly super fast.