Message-ID: <CALCETrWhUWjfdDS6eyB6PfrJLU8YvvrfkeeKFTo8moxq7L5t6A@mail.gmail.com>
Date: Fri, 29 Jan 2016 09:35:22 -0800
From: Andy Lutomirski <luto@...capital.net>
To: Borislav Petkov <bp@...en8.de>
Cc: Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Brian Gerst <brgerst@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>
Subject: Re: [PATCH v2 3/3] x86/mm: If INVPCID is available, use it to flush
global mappings
On Fri, Jan 29, 2016 at 6:26 AM, Borislav Petkov <bp@...en8.de> wrote:
> On Mon, Jan 25, 2016 at 10:37:44AM -0800, Andy Lutomirski wrote:
>> On my Skylake laptop, INVPCID function 2 (flush absolutely
>> everything) takes about 376ns, whereas saving flags, twiddling
>> CR4.PGE to flush global mappings, and restoring flags takes about
>> 539ns.
>
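[Editor's note: for reference, the two primitives being compared look
roughly like this in ring-0, x86-64 C. This is a sketch, not the patch
itself; the function names are illustrative, and the real code uses the
kernel's cr4 accessors in arch/x86/include/asm/tlbflush.h.]

/* Sketch only: both of these must run in the kernel (ring 0). */

static inline void sketch_invpcid_flush_all(void)
{
	/* INVPCID type 2: flush everything, global mappings included. */
	struct { unsigned long long pcid, addr; } desc = { 0, 0 };

	asm volatile ("invpcid %0, %1"
		      : : "m" (desc), "r" (2UL) : "memory");
}

static inline void sketch_cr4_pge_flush_all(void)
{
	unsigned long flags, cr4;

	/* "Saving flags": interrupts must stay off across the CR4 dance. */
	asm volatile ("pushfq; popq %0; cli" : "=r" (flags) : : "memory");

	asm volatile ("movq %%cr4, %0" : "=r" (cr4));
	/* Clearing CR4.PGE (bit 7) flushes global mappings... */
	asm volatile ("movq %0, %%cr4" : : "r" (cr4 & ~(1UL << 7)) : "memory");
	/* ...and writing the old value back turns PGE on again. */
	asm volatile ("movq %0, %%cr4" : : "r" (cr4) : "memory");

	/* "Restoring flags" re-enables interrupts if they were on. */
	asm volatile ("pushq %0; popfq" : : "rm" (flags) : "memory", "cc");
}

The 376ns vs 539ns figures quoted above are the per-call cost of
something like the first function vs the second.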
> FWIW, I ran your microbenchmark on the IVB laptop I have here 3 times
> and some of the numbers from each run are pretty unstable. Not that it
> means a whole lot - the thing doesn't have INVPCID support.
>
> I'm just questioning the microbenchmark and whether we should rather be
> doing those measurements with a real benchmark, whatever that means. My
> limited experience says that measuring TLB performance is hard.
>
> ./context_switch_latency 0 thread same
> use_xstate = 0
> Using threads
> 1: 100000 iters at 2676.2 ns/switch
> 2: 100000 iters at 2700.2 ns/switch
> 3: 100000 iters at 2656.1 ns/switch
>
> ./context_switch_latency 0 thread different
> use_xstate = 0
> Using threads
> 1: 100000 iters at 5174.8 ns/switch
> 2: 100000 iters at 5140.5 ns/switch
> 3: 100000 iters at 5292.9 ns/switch
>
> ./context_switch_latency 0 process same
> use_xstate = 0
> Using a subprocess
> 1: 100000 iters at 2361.2 ns/switch
> 2: 100000 iters at 2332.2 ns/switch
> 3: 100000 iters at 3436.9 ns/switch
>
> ./context_switch_latency 0 process different
> use_xstate = 0
> Using a subprocess
> 1: 100000 iters at 4713.6 ns/switch
> 2: 100000 iters at 4957.5 ns/switch
> 3: 100000 iters at 5012.2 ns/switch
>
> ./context_switch_latency 1 thread same
> use_xstate = 1
> Using threads
> 1: 100000 iters at 2505.6 ns/switch
> 2: 100000 iters at 2483.1 ns/switch
> 3: 100000 iters at 2479.7 ns/switch
>
> ./context_switch_latency 1 thread different
> use_xstate = 1
> Using threads
> 1: 100000 iters at 5245.9 ns/switch
> 2: 100000 iters at 5241.1 ns/switch
> 3: 100000 iters at 5220.3 ns/switch
>
> ./context_switch_latency 1 process same
> use_xstate = 1
> Using a subprocess
> 1: 100000 iters at 2329.8 ns/switch
> 2: 100000 iters at 2350.2 ns/switch
> 3: 100000 iters at 2500.9 ns/switch
>
> ./context_switch_latency 1 process different
> use_xstate = 1
> Using a subprocess
> 1: 100000 iters at 4970.7 ns/switch
> 2: 100000 iters at 5034.0 ns/switch
> 3: 100000 iters at 4991.6 ns/switch
>
I'll fiddle with that benchmark a little bit. Maybe I can make it
suck less. If anyone knows a good non-micro benchmark for this, let
me know. I refuse to use dbus as my benchmark :)
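[Editor's note: for the curious, the pattern behind benchmarks like the
one above is a pipe ping-pong: two tasks pinned to one CPU bounce a byte
back and forth, so every round trip costs two context switches. A sketch
follows; this is not the actual context_switch_latency source.]

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define ITERS 100000

int main(void)
{
	int ab[2], ba[2];
	char c = 0;
	struct timespec t0, t1;
	pid_t pid;

	if (pipe(ab) || pipe(ba)) {
		perror("pipe");
		return 1;
	}

	pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}
	if (pid == 0) {
		/* Child: echo every byte straight back. */
		for (int i = 0; i < ITERS; i++) {
			if (read(ab[0], &c, 1) != 1 || write(ba[1], &c, 1) != 1)
				exit(1);
		}
		exit(0);
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++) {
		write(ab[1], &c, 1);	/* wake the child... */
		read(ba[0], &c, 1);	/* ...and sleep until it answers */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	/* Each round trip is two context switches. */
	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%d iters at %.1f ns/switch\n", ITERS, ns / (2.0 * ITERS));
	return 0;
}

Run it under taskset -c 0 so both tasks share a CPU; otherwise the
"switches" become cross-CPU wakeups and the number measures something
else entirely.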
FWIW, I benchmarked CR4 vs INVPCID by adding a prctl and calling it in
a loop. If Ingo's FPU benchmark thing ever lands, I'll gladly send a
patch to add TLB flushes to it.
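[Editor's note: that loop looked roughly like the sketch below. The
prctl number is hypothetical, a stand-in for a throwaway debug hook, but
the idea is just to time N kernel-side flushes from userspace.]

#include <stdio.h>
#include <sys/prctl.h>
#include <time.h>

#define PR_FLUSH_TLB_HACK 0x59410001	/* hypothetical debug prctl */
#define ITERS 1000000

int main(void)
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++)
		prctl(PR_FLUSH_TLB_HACK, 0, 0, 0, 0);	/* one full TLB flush */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%.1f ns/flush\n", ns / ITERS);
	return 0;
}

The raw number includes syscall overhead; subtracting the cost of a
no-op prctl, or simply comparing the two flush methods against each
other through the same hook, removes it from the comparison.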
--Andy