Date:	Tue, 12 Jan 2016 12:11:10 -0800
From:	Andy Lutomirski <luto@...nel.org>
To:	peterz@...radead.org, x86@...nel.org
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Rik van Riel <riel@...hat.com>,
	Brian Gerst <brgerst@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Borislav Petkov <bp@...en8.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"linux-tip-commits@...r.kernel.org" 
	<linux-tip-commits@...r.kernel.org>,
	Andy Lutomirski <luto@...nel.org>, stable@...r.kernel.org
Subject: [PATCH] x86/mm: Improve switch_mm barrier comments

My previous comments were still a bit confusing, and one of them had
a typo ("much" instead of "must").  Fix them up.

Reported-by: Peter Zijlstra <peterz@...radead.org>
Fixes: 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
Cc: stable@...r.kernel.org
Signed-off-by: Andy Lutomirski <luto@...nel.org>
---
 arch/x86/include/asm/mmu_context.h | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 1edc9cd198b8..e08d27369fce 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -132,14 +132,14 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 		 * be sent, and CPU 0's TLB will contain a stale entry.)
 		 *
 		 * The bad outcome can occur if either CPU's load is
-		 * reordered before that CPU's store, so both CPUs much
+		 * reordered before that CPU's store, so both CPUs must
 		 * execute full barriers to prevent this from happening.
 		 *
 		 * Thus, switch_mm needs a full barrier between the
 		 * store to mm_cpumask and any operation that could load
 		 * from next->pgd.  This barrier synchronizes with
-		 * remote TLB flushers.  Fortunately, load_cr3 is
-		 * serializing and thus acts as a full barrier.
+		 * remote TLB flushers.  Fortunately, cpumask_set_cpu is
+		 * (on x86) a full barrier, and load_cr3 is serializing.
 		 *
 		 */
 		load_cr3(next->pgd);
@@ -188,9 +188,10 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 			 * tlb flush IPI delivery. We must reload CR3
 			 * to make sure to use no freed page tables.
 			 *
-			 * As above, this is a barrier that forces
-			 * TLB repopulation to be ordered after the
-			 * store to mm_cpumask.
+			 * As above, either of cpumask_set_cpu and
+			 * load_cr3 is a sufficient barrier to force TLB
+			 * repopulation to be ordered after the store to
+			 * mm_cpumask.
 			 */
 			load_cr3(next->pgd);
 			trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
-- 
2.5.0
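
For reference, the ordering pattern the comment describes is the
classic store-buffering case: each CPU stores to one variable and then
loads the other, and if either CPU's load is reordered before its own
store, each side can miss the other's update.  Below is a minimal
userspace analogue (a sketch, not kernel code: C11 atomics and seq_cst
fences stand in for cpumask_set_cpu() and load_cr3(), and the names
cpu_in_mask, pgd_version, switcher, and flusher are hypothetical):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int cpu_in_mask;	/* stands in for this CPU's mm_cpumask bit */
static atomic_int pgd_version;	/* stands in for the page-table contents */

static int saw_pgd, saw_mask;

/* CPU 0: the switch_mm() side */
static void *switcher(void *arg)
{
	atomic_store(&cpu_in_mask, 1);		/* store: set our mm_cpumask bit */
	atomic_thread_fence(memory_order_seq_cst);
						/* full barrier: on x86, the
						   LOCKed RMW in cpumask_set_cpu()
						   or the serializing load_cr3() */
	saw_pgd = atomic_load(&pgd_version);	/* load: read next->pgd */
	return arg;
}

/* CPU 1: the remote TLB flusher */
static void *flusher(void *arg)
{
	atomic_store(&pgd_version, 1);		/* store: update/free a page table */
	atomic_thread_fence(memory_order_seq_cst);
						/* full barrier before scanning
						   mm_cpumask */
	saw_mask = atomic_load(&cpu_in_mask);	/* load: decide whether to IPI */
	return arg;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, switcher, NULL);
	pthread_create(&t1, NULL, flusher, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* With both fences present, saw_pgd == 0 && saw_mask == 0 (stale
	 * page tables *and* a skipped IPI) is forbidden; drop either
	 * fence and that outcome becomes possible, since x86 permits
	 * loads to be reordered before earlier stores. */
	printf("switcher saw pgd=%d, flusher saw mask=%d\n",
	       saw_pgd, saw_mask);
	return 0;
}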
