Message-Id: <20180801100255.4278-2-riel@surriel.com>
Date: Wed, 1 Aug 2018 06:02:45 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: kernel-team@...com, mingo@...nel.org, peterz@...radead.org,
luto@...nel.org, x86@...nel.org, efault@....de,
dave.hansen@...el.com, Rik van Riel <riel@...riel.com>
Subject: [PATCH 01/11] x86,tlb: clarify memory barrier in switch_mm_irqs_off

Clarify exactly what the memory barrier synchronizes with.

Suggested-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Rik van Riel <riel@...riel.com>
Reviewed-by: Andy Lutomirski <luto@...nel.org>
---
 arch/x86/mm/tlb.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 752dbf4e0e50..5321e02c4e09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -263,8 +263,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		/*
 		 * Read the tlb_gen to check whether a flush is needed.
 		 * If the TLB is up to date, just use it.
-		 * The barrier synchronizes with the tlb_gen increment in
-		 * the TLB shootdown code.
+		 * The TLB shootdown code first increments tlb_gen, and then
+		 * sends IPIs to CPUs that have this mm loaded and are not
+		 * in lazy TLB mode. The barrier ensures we handle
+		 * cpu_tlbstate.is_lazy before tlb_gen, keeping this code
+		 * synchronized with the TLB flush code.
 		 */
 		smp_mb();
 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
--
2.14.4
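
To make the ordering concrete, here is a minimal userspace sketch of
the store-buffering pattern the new comment describes, using C11
atomics and pthreads in place of the kernel primitives. The names
tlb_gen and is_lazy mirror the kernel fields; the thread bodies, the
messages, and the file name are illustrative stand-ins, not kernel
code:

/* cc -pthread sketch.c && ./a.out */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_long tlb_gen;	/* stands in for mm->context.tlb_gen */
static atomic_bool is_lazy;	/* stands in for cpu_tlbstate.is_lazy */

/* TLB shootdown side: bump the generation, then pick the IPI targets. */
static void *shootdown_side(void *arg)
{
	(void)arg;
	atomic_fetch_add_explicit(&tlb_gen, 1, memory_order_relaxed);
	/* In the kernel, the atomic RMW on tlb_gen implies a full barrier. */
	atomic_thread_fence(memory_order_seq_cst);
	if (!atomic_load_explicit(&is_lazy, memory_order_relaxed))
		puts("shootdown: CPU is not lazy, would send a flush IPI");
	return NULL;
}

/* switch_mm_irqs_off() side: leave lazy mode, then check the generation. */
static void *switch_mm_side(void *arg)
{
	(void)arg;
	atomic_store_explicit(&is_lazy, false, memory_order_relaxed);
	/* This fence plays the role of the smp_mb() in the patch. */
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&tlb_gen, memory_order_relaxed) > 1)
		puts("switch_mm: saw the new tlb_gen, would flush locally");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	atomic_store(&tlb_gen, 1);
	atomic_store(&is_lazy, true);
	pthread_create(&a, NULL, shootdown_side, NULL);
	pthread_create(&b, NULL, switch_mm_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Whatever the interleaving, the two fences guarantee that at least one
of the messages prints: either the shootdown side sees the CPU as
non-lazy and sends it an IPI, or the switching side sees the new
generation and flushes locally. Without the barrier, both loads could
complete before the other side's store became visible, and the flush
would be missed entirely.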