Message-Id: <20180728215357.3249-2-riel@surriel.com>
Date: Sat, 28 Jul 2018 17:53:48 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: kernel-team@...com, peterz@...radead.org, luto@...nel.org,
x86@...nel.org, vkuznets@...hat.com, mingo@...nel.org,
efault@....de, dave.hansen@...el.com, will.deacon@....com,
catalin.marinas@....com, benh@...nel.crashing.org,
Rik van Riel <riel@...riel.com>
Subject: [PATCH 01/10] x86,tlb: clarify memory barrier in switch_mm_irqs_off
Clarify exactly what the memory barrier synchronizes with.
Suggested-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Rik van Riel <riel@...riel.com>
---
arch/x86/mm/tlb.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 752dbf4e0e50..5321e02c4e09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -263,8 +263,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
/*
* Read the tlb_gen to check whether a flush is needed.
* If the TLB is up to date, just use it.
- * The barrier synchronizes with the tlb_gen increment in
- * the TLB shootdown code.
+ * The TLB shootdown code first increments tlb_gen, and then
+ * sends IPIs to CPUs that have this mm loaded and are not
+ * in lazy TLB mode. The barrier ensures the update of
+ * cpu_tlbstate.is_lazy is ordered before the read of tlb_gen,
+ * keeping this code synchronized with the TLB flush code.
*/
smp_mb();
next_tlb_gen = atomic64_read(&next->context.tlb_gen);
--
2.14.4
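
For readers who want to see the pairing in isolation, below is a rough
userspace C11 sketch of the store-buffering pattern the comment above
describes. It is not kernel code: the names switch_side()/flush_side(),
the got_ipi flag, and the use of atomic_thread_fence(memory_order_seq_cst)
as a stand-in for smp_mb() are illustrative assumptions. The point it
models is that either the flush side observes is_lazy == false and sends
the IPI, or the switching CPU observes the incremented tlb_gen and
flushes locally, so a flush can never be lost.

/* build: cc -O2 -pthread tlb_barrier_sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool is_lazy = true;   /* models this CPU's cpu_tlbstate.is_lazy */
static atomic_long tlb_gen = 1;      /* models mm->context.tlb_gen             */
static atomic_bool got_ipi = false;  /* models receiving the flush IPI         */
static long local_gen = 1;           /* generation this CPU's TLB reflects     */

/* Side 1: leaving lazy TLB mode in switch_mm_irqs_off(). */
static void *switch_side(void *arg)
{
	(void)arg;
	atomic_store(&is_lazy, false);              /* stop being lazy          */
	atomic_thread_fence(memory_order_seq_cst);  /* stands in for smp_mb()   */
	long g = atomic_load(&tlb_gen);             /* only then read tlb_gen   */
	if (g != local_gen) {
		local_gen = g;                      /* stale TLB: flush locally */
		printf("switch side: flushed locally (gen %ld)\n", g);
	}
	return NULL;
}

/* Side 2: the TLB shootdown path running on another CPU. */
static void *flush_side(void *arg)
{
	(void)arg;
	atomic_fetch_add(&tlb_gen, 1);              /* first increment tlb_gen  */
	atomic_thread_fence(memory_order_seq_cst);  /* paired full barrier      */
	if (!atomic_load(&is_lazy)) {               /* only then check laziness */
		atomic_store(&got_ipi, true);       /* not lazy: send the "IPI" */
		printf("flush side: sent IPI\n");
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, switch_side, NULL);
	pthread_create(&b, NULL, flush_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* With both fences in place the "neither side acts" outcome is
	 * forbidden, so at least one of the two messages is printed. */
	printf("got_ipi=%d local_gen=%ld\n", (int)atomic_load(&got_ipi), local_gen);
	return 0;
}

Dropping either fence re-allows the classic store-buffering outcome in
which both sides read the stale value and the flush is silently missed,
which is exactly what the smp_mb() in switch_mm_irqs_off() prevents.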