Message-Id: <20190525082203.6531-7-namit@vmware.com>
Date: Sat, 25 May 2019 01:22:03 -0700
From: Nadav Amit <namit@...are.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>, linux-kernel@...r.kernel.org,
Nadav Amit <namit@...are.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org
Subject: [RFC PATCH 6/6] x86/mm/tlb: Optimize local TLB flushes
While the updated smp infrastructure is capable of running a function on
a single local core, it is not optimized for this case. The multiple
function calls and the indirect branch introduce some overhead, making
local TLB flushes slower than they were before the recent changes.

Before calling the SMP infrastructure, check whether only a local TLB
flush is needed, in order to restore the lost performance in this common
case. This requires checking mm_cpumask() one more time, but unless this
mask is updated very frequently, it should not impact performance
negatively.
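
For clarity, the resulting flush_tlb_on_cpus() roughly becomes the
following (a sketch reconstructed from the diff below; the unchanged
local-flush path in the middle is elided):

static void flush_tlb_on_cpus(const cpumask_t *cpumask,
			      const struct flush_tlb_info *info)
{
	int this_cpu = smp_processor_id();
	bool flush_others = false;

	/* Check once whether any CPU other than the local one needs a flush. */
	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
		flush_others = true;

	/* Take the multi-CPU path only when a remote flush is actually needed. */
	if (static_branch_likely(&flush_tlb_multi_enabled) && flush_others) {
		flush_tlb_multi(cpumask, info);
		return;
	}

	/* ... flush the local TLB on this_cpu, as before ... */

	if (flush_others)
		flush_tlb_others(cpumask, info);
}
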
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: x86@...nel.org
Signed-off-by: Nadav Amit <namit@...are.com>
---
arch/x86/mm/tlb.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0ec2bfca7581..3f3f983e224e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -823,8 +823,12 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 			      const struct flush_tlb_info *info)
 {
 	int this_cpu = smp_processor_id();
+	bool flush_others = false;
 
-	if (static_branch_likely(&flush_tlb_multi_enabled)) {
+	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+		flush_others = true;
+
+	if (static_branch_likely(&flush_tlb_multi_enabled) && flush_others) {
 		flush_tlb_multi(cpumask, info);
 		return;
 	}
@@ -836,7 +840,7 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
 		local_irq_enable();
 	}
 
-	if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+	if (flush_others)
 		flush_tlb_others(cpumask, info);
 }
 
--
2.20.1