Message-ID: <tip-4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b@git.kernel.org>
Date: Thu, 14 Jan 2016 01:06:43 -0800
From: tip-bot for Andy Lutomirski <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: hpa@...or.com, riel@...hat.com, luto@...nel.org,
tglx@...utronix.de, luto@...capital.net, dvlasenk@...hat.com,
linux-kernel@...r.kernel.org, peterz@...radead.org,
torvalds@...ux-foundation.org, mingo@...nel.org, bp@...en8.de,
brgerst@...il.com, dave.hansen@...ux.intel.com
Subject: [tip:x86/urgent] x86/mm: Improve switch_mm() barrier comments
Commit-ID: 4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b
Gitweb: http://git.kernel.org/tip/4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b
Author: Andy Lutomirski <luto@...nel.org>
AuthorDate: Tue, 12 Jan 2016 12:47:40 -0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 13 Jan 2016 10:42:49 +0100

x86/mm: Improve switch_mm() barrier comments

My previous comments were still a bit confusing and there was a
typo. Fix it up.

Reported-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Andy Lutomirski <luto@...nel.org>
Cc: Andy Lutomirski <luto@...capital.net>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: stable@...r.kernel.org
Fixes: 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
Link: http://lkml.kernel.org/r/0a0b43cdcdd241c5faaaecfbcc91a155ddedc9a1.1452631609.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/include/asm/mmu_context.h | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 1edc9cd..bfd9b2a 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -132,14 +132,16 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
* be sent, and CPU 0's TLB will contain a stale entry.)
*
* The bad outcome can occur if either CPU's load is
- * reordered before that CPU's store, so both CPUs much
+ * reordered before that CPU's store, so both CPUs must
* execute full barriers to prevent this from happening.
*
* Thus, switch_mm needs a full barrier between the
* store to mm_cpumask and any operation that could load
- * from next->pgd. This barrier synchronizes with
- * remote TLB flushers. Fortunately, load_cr3 is
- * serializing and thus acts as a full barrier.
+ * from next->pgd. TLB fills are special and can happen
+ * due to instruction fetches or for no reason at all,
+ * and neither LOCK nor MFENCE orders them.
+ * Fortunately, load_cr3() is serializing and gives the
+ * ordering guarantee we need.
*
*/
load_cr3(next->pgd);
@@ -188,9 +190,8 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
* tlb flush IPI delivery. We must reload CR3
* to make sure to use no freed page tables.
*
- * As above, this is a barrier that forces
- * TLB repopulation to be ordered after the
- * store to mm_cpumask.
+ * As above, load_cr3() is serializing and orders TLB
+ * fills with respect to the mm_cpumask write.
*/
load_cr3(next->pgd);
trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
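
For readers who want to experiment with the store->load pairing the
comments above describe, here is a minimal userspace C11 model. It is
NOT kernel code: every name in it is invented for illustration, a
seq_cst fence stands in for the serializing load_cr3() on one side and
for the atomic op in the flusher path on the other, and it models only
the generic Dekker-style ordering, not the TLB-fill subtlety. With both
fences in place, at least one side must observe the other's store, so
the "both see stale values" outcome cannot occur.

/*
 * Hypothetical model of the switch_mm()-vs-flush ordering.
 * Build with: cc -pthread model.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int cpu_in_mask;   /* models this CPU's mm_cpumask bit */
static atomic_int pgd_updated;   /* models a page-table modification */
static int switcher_saw_update, flusher_saw_cpu;

/* Models switch_mm(): publish "I use this mm", then read page tables. */
static void *switching_cpu(void *arg)
{
	atomic_store_explicit(&cpu_in_mask, 1, memory_order_relaxed);
	/* In the kernel, the serializing load_cr3() plays this role. */
	atomic_thread_fence(memory_order_seq_cst);
	switcher_saw_update =
		atomic_load_explicit(&pgd_updated, memory_order_relaxed);
	return NULL;
}

/* Models the flusher: update page tables, then read the cpumask. */
static void *flushing_cpu(void *arg)
{
	atomic_store_explicit(&pgd_updated, 1, memory_order_relaxed);
	/* In the kernel, an atomic op in the flush path plays this role. */
	atomic_thread_fence(memory_order_seq_cst);
	flusher_saw_cpu =
		atomic_load_explicit(&cpu_in_mask, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, switching_cpu, NULL);
	pthread_create(&b, NULL, flushing_cpu, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* With both fences, "0 / 0" is impossible: one side must notice. */
	printf("switcher saw update: %d, flusher saw CPU: %d\n",
	       switcher_saw_update, flusher_saw_cpu);
	return 0;
}

The kernel cannot rely on a plain fence here because, as the patched
comment notes, TLB fills can happen speculatively and are not ordered
by LOCK or MFENCE; only a serializing instruction such as the CR3 write
gives the required guarantee.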