Message-ID: <20190527095959.GV2623@hirez.programming.kicks-ass.net>
Date: Mon, 27 May 2019 11:59:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Nadav Amit <namit@...are.com>
Cc: Ingo Molnar <mingo@...hat.com>, Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>, linux-kernel@...r.kernel.org,
jgross@...e.com, kys@...rosoft.com, haiyangz@...rosoft.com,
sthemmin@...rosoft.com, sashal@...nel.org
Subject: Re: [RFC PATCH 0/6] x86/mm: Flush remote and local TLBs concurrently
On Sat, May 25, 2019 at 01:21:57AM -0700, Nadav Amit wrote:
> Currently, local and remote TLB flushes are not performed concurrently,
> which introduces unnecessary overhead - each INVLPG can take 100s of
> cycles. This patch-set allows TLB flushes to be run concurrently: first
> request the remote CPUs to initiate the flush, then run it locally, and
> finally wait for the remote CPUs to finish their work.
>
> The proposed changes should also improve the performance of other
> invocations of on_each_cpu(). Hopefully, no one has relied on
> on_each_cpu()'s previous behavior of running the function on remote
> CPUs first and only then locally.
>
> On my Haswell machine (bare-metal), running a TLB flush microbenchmark
> (MADV_DONTNEED/touch of a single page on one thread) takes the
> following time (ns):
>
> n_threads before after
> --------- ------ -----
> 1 661 663
> 2 1436 1225 (-14%)
> 4 1571 1421 (-10%)
>
> Note that since the benchmark also causes page-faults, the actual
> speedup of the TLB shootdowns themselves is greater. Also note the
> larger improvement with 2 threads (a single remote TLB flush target).
> This seems to be a side-effect of keeping the synchronization
> data-structures (csd) off the stack, unlike what is currently done
> in smp_call_function_single().
>
> Patches 1-2 do small cleanups. Patches 3-5 actually implement the
> concurrent execution of TLB flushes. Patch 6 restores the performance
> of local TLB flushes, which was hurt by the optimization, to its
> previous level by introducing a fast-path for this specific case.
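
A rough sketch of the ordering change the cover letter describes. This
is not the actual patch; the helper names (request_remote_flush_nowait(),
local_flush(), wait_for_remote_flush()) are made-up stand-ins for
whatever the series really introduces:

	/*
	 * Conceptual sketch only -- the helpers below are hypothetical,
	 * not functions from the series or from mainline.
	 *
	 * Old behaviour: send IPIs, wait for every remote CPU to finish
	 * its flush, and only then flush the local TLB (serialized).
	 * New behaviour: kick the remote CPUs, flush locally while they
	 * work, and only then wait for their completion.
	 */
	static void flush_tlb_concurrent(const struct cpumask *cpus,
					 const struct flush_tlb_info *info)
	{
		/* 1) Ask the remote CPUs to start flushing; do not wait yet. */
		request_remote_flush_nowait(cpus, info);	/* hypothetical */

		/* 2) Do the local flush while the remote flushes run. */
		local_flush(info);				/* hypothetical */

		/* 3) Only now wait for the remote CPUs to finish. */
		wait_for_remote_flush(cpus);			/* hypothetical */
	}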
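A hedged reconstruction of the kind of microbenchmark described above -
not Nadav's actual test, just the core MADV_DONTNEED/touch loop for the
single-thread case. The 2- and 4-thread numbers additionally require
sibling threads sharing the address space on other CPUs, so that remote
shootdowns are actually needed:

	#include <stdio.h>
	#include <stdint.h>
	#include <sys/mman.h>
	#include <time.h>
	#include <unistd.h>

	#define ITERS	100000

	static uint64_t now_ns(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
	}

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		uint64_t start;
		char *p;
		int i;

		p = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;

		start = now_ns();
		for (i = 0; i < ITERS; i++) {
			/* Zap the page: exercises the TLB flush path. */
			madvise(p, page, MADV_DONTNEED);
			/* Touch it again: page fault + repopulate. */
			p[0] = 1;
		}
		printf("avg ns per DONTNEED+touch: %llu\n",
		       (unsigned long long)((now_ns() - start) / ITERS));
		return 0;
	}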
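And a guess at the shape of the patch 6 fast-path: when the current CPU
is the only user of the mm there is nothing to run concurrently, so the
remote-flush machinery can be skipped entirely. Again illustrative names
only, not the actual code from the series:

	/* Illustrative sketch, not from the series: bail out to a plain
	 * local flush when no remote CPU needs to be notified. */
	if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) {
		local_flush(info);			/* hypothetical */
		return;
	}
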
I like; ideally we'll get Hyper-V and Xen sorted before the final
version and avoid having to introduce more PV crud and static keys for
that.
The Hyper-V implementation in particular is horrifically ugly; the Xen
one doesn't win any prizes either, esp. that on-stack CPU mask needs to
go.
Looking at them, I'm not sure they actually gain anything from using
the new interface, but at least we can avoid making our PV crud uglier
than it has to be.