Message-Id: <20080116221521.BA00F150C8@wotan.suse.de>
Date:	Wed, 16 Jan 2008 23:15:21 +0100 (CET)
From:	Andi Kleen <ak@...e.de>
To:	linux-kernel@...r.kernel.org, mingo@...e.hu, tglx@...utronix.de,
	jbeulich@...ell.com, venkatesh.pallipadi@...el.com
Subject: [PATCH] [22/36] CPA: Reorder TLB / cache flushes to follow Intel recommendation


Intel recommends flushing the TLBs first and then the caches
when caching attributes are changed. c_p_a() previously did it
the other way round. Reorder the flushes to match.

The procedure is still not fully compliant with the Intel
documentation, because Intel recommends an all-CPU
synchronization step between the TLB flushes and the cache
flushes.

However, on all newer Intel CPUs this is moot anyway, because
they support Self-Snoop and can skip the cache flush step.
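
For reference, here is a minimal stand-alone C sketch of the ordering this
patch establishes: TLBs are invalidated first, caches are flushed second,
and the cache step is skipped entirely on Self-Snoop CPUs. The helpers are
stand-ins for the real primitives (__flush_tlb_one(), clflush_cache_range(),
wbinvd()), so the file compiles on its own; it only illustrates the control
flow, not the actual kernel code.

#include <stdbool.h>
#include <stdio.h>

struct flush_arg {
	bool full_flush;		/* flush everything vs. a list of pages */
	bool cpu_has_clflush;		/* CLFLUSH instruction available */
	bool cpu_has_ss;		/* Self-Snoop: cache flush unnecessary */
	const unsigned long *addrs;	/* pages whose attributes changed */
	int naddrs;
};

/* Stubs standing in for the real kernel primitives. */
static void flush_tlb_one(unsigned long addr) { printf("invlpg %#lx\n", addr); }
static void flush_tlb_all(void)               { printf("flush all TLBs\n"); }
static void clflush_page(unsigned long addr)  { printf("clflush %#lx\n", addr); }
static void wbinvd_stub(void)                 { printf("wbinvd\n"); }

static void flush_kernel_map_sketch(const struct flush_arg *a)
{
	int i;

	/* Step 1: TLB flushes come first, per Intel's recommendation. */
	if (a->full_flush) {
		flush_tlb_all();
	} else {
		for (i = 0; i < a->naddrs; i++)
			flush_tlb_one(a->addrs[i]);
	}

	/*
	 * Step 2: cache flushes afterwards. Self-Snoop CPUs keep their
	 * caches coherent across attribute changes, so skip this step.
	 */
	if (a->cpu_has_ss)
		return;

	if (!a->full_flush && a->cpu_has_clflush) {
		/* CLFLUSH per page is much cheaper than a full WBINVD. */
		for (i = 0; i < a->naddrs; i++)
			clflush_page(a->addrs[i]);
	} else {
		wbinvd_stub();
	}
}

int main(void)
{
	unsigned long pages[] = { 0x100000UL };
	struct flush_arg a = {
		.full_flush = false,
		.cpu_has_clflush = true,
		.cpu_has_ss = false,
		.addrs = pages,
		.naddrs = 1,
	};

	flush_kernel_map_sketch(&a);
	return 0;
}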

Signed-off-by: Andi Kleen <ak@...e.de>
Acked-by: Jan Beulich <jbeulich@...ell.com>

---
 arch/x86/mm/pageattr_32.c |   13 ++++++++++---
 arch/x86/mm/pageattr_64.c |   14 ++++++++++----
 2 files changed, 20 insertions(+), 7 deletions(-)

Index: linux/arch/x86/mm/pageattr_32.c
===================================================================
--- linux.orig/arch/x86/mm/pageattr_32.c
+++ linux/arch/x86/mm/pageattr_32.c
@@ -97,9 +97,6 @@ static void flush_kernel_map(void *arg)
 	struct flush_arg *a = (struct flush_arg *)arg;
 	struct flush *f;
 
-	if ((!cpu_has_clflush || a->full_flush) && boot_cpu_data.x86_model >= 4 &&
-		!cpu_has_ss)
-		wbinvd();
 	list_for_each_entry(f, &a->l, l) {
 		if (!a->full_flush && !cpu_has_ss)
 			clflush_cache_range((void *)f->addr, PAGE_SIZE);
@@ -112,6 +109,16 @@ static void flush_kernel_map(void *arg)
 	    (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
 	     boot_cpu_data.x86 == 7))
 		__flush_tlb_all();
+
+	/*
+	 * RED-PEN: the Intel documentation asks for a CPU synchronization step
+	 * here and in the loop. But it is moot on Self-Snoop CPUs anyway.
+	 */
+
+	if ((!cpu_has_clflush || a->full_flush) &&
+	    !cpu_has_ss && boot_cpu_data.x86_model >= 4)
+		wbinvd();
+
 }
 
 static void set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte) 
Index: linux/arch/x86/mm/pageattr_64.c
===================================================================
--- linux.orig/arch/x86/mm/pageattr_64.c
+++ linux/arch/x86/mm/pageattr_64.c
@@ -94,16 +94,22 @@ static void flush_kernel_map(void *arg)
 
 	/* When clflush is available always use it because it is
 	   much cheaper than WBINVD. */
-	if ((a->full_flush || !cpu_has_clflush) && !cpu_has_ss)
-		wbinvd();
 	list_for_each_entry(f, &a->l, l) {
-		if (!a->full_flush && !cpu_has_ss)
-			clflush_cache_range((void *)f->addr, PAGE_SIZE);
 		if (!a->full_flush)
 			__flush_tlb_one(f->addr);
+		if (!a->full_flush && !cpu_has_ss)
+			clflush_cache_range((void *)f->addr, PAGE_SIZE);
 	}
 	if (a->full_flush)
 		__flush_tlb_all();
+
+	/*
+	 * RED-PEN: the Intel documentation asks for a CPU synchronization step
+	 * here and in the loop. But it is moot on Self-Snoop CPUs anyway.
+	 */
+
+	if ((!cpu_has_clflush || a->full_flush) && !cpu_has_ss)
+		wbinvd();
 }
 
 /* both protected by init_mm.mmap_sem */