Date:	Wed, 18 Feb 2009 15:55:14 -0800
From:	Suresh Siddha <suresh.b.siddha@...el.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Nick Piggin <npiggin@...e.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Oleg Nesterov <oleg@...hat.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Steven Rostedt <rostedt@...dmis.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: smp.c && barriers (Was: [PATCH 1/4] generic-smp: remove single
	ipi fallback for smp_call_function_many())

On Wed, 2009-02-18 at 11:17 -0800, Ingo Molnar wrote: 
> * Suresh Siddha <suresh.b.siddha@...el.com> wrote:
> 
> > > Indeed that could cause problems on some architectures which I
> > > had hoped to avoid. So the patch is probably better off to first
> > > add the smp_mb() to arch_send_call_function_xxx arch code, unless
> > > it is immediately obvious or confirmed by arch maintainer that
> > > such barrier is not required.
> > 
> > For x2apic specific operations we should add the smp_mb() sequence. But
> > we need to make sure that we don't end up doing it twice (once in
> > generic code and another in arch code) for all the ipi paths.
> 
> right now we do have an smp_mb() due to your fix in November.
> 
> So what should happen is to move that smp_mb() from the x86 
> generic IPI path to the x86 x2apic IPI path. (and turn it into 
> an smp_wmb() - that should be enough - we don't care about future 
> reads being done sooner than this point.)

Ingo, smp_wmb() won't help: x2apic register writes can still be executed
ahead of the sfence. According to the SDM, we need a serializing instruction
or an mfence, and our internal experiments confirmed this as well.
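
To make the ordering concrete, here is a minimal userspace sketch (an
illustration only, not kernel code; queue_csd() and write_icr_msr() are
hypothetical stand-ins for the generic-smp enqueue and the x2apic ICR
write):

#include <emmintrin.h>	/* _mm_mfence() */
#include <stdio.h>

static volatile unsigned long csd_data;	/* stands in for the call-function queue */

static void queue_csd(unsigned long data)
{
	csd_data = data;	/* ordinary store to write-back memory */
}

static void write_icr_msr(unsigned long val)
{
	/*
	 * In the kernel this is a WRMSR to the x2apic ICR.  WRMSR is not
	 * an ordinary store, so an sfence does not order it against the
	 * earlier stores.  Here we only print to show where it lands.
	 */
	printf("IPI sent, icr=%#lx\n", val);
}

int main(void)
{
	queue_csd(0x1);		/* 1. publish the call-function data */

	/*
	 * 2. smp_wmb() (sfence) is not enough here: the MSR write below
	 * could still be executed before the store above is globally
	 * visible.  A serializing instruction or mfence (smp_mb()) is
	 * required, which is what the patch below adds.
	 */
	_mm_mfence();

	write_icr_msr(0x1);	/* 3. trigger the IPI (arbitrary value) */
	return 0;
}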

Appended is the x86 portion of the patch:
---

From: Suresh Siddha <suresh.b.siddha@...el.com>
Subject: x86: move smp_mb() in x86 flush tlb path to x2apic specific IPI paths

Uncached MMIO accesses for xapic are inherently serializing, so we don't
need explicit barriers in the xapic IPI paths.

x2apic MSR writes/reads don't have serializing semantics, so we need a
serializing instruction or mfence to make all the previous memory stores
globally visible before the x2apic MSR write for the IPI.

Hence, move the smp_mb() in the x86 flush tlb path to the x2apic-specific
IPI paths.

Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
---

diff --git a/arch/x86/kernel/genx2apic_cluster.c b/arch/x86/kernel/genx2apic_cluster.c
index 7c87156..b237248 100644
--- a/arch/x86/kernel/genx2apic_cluster.c
+++ b/arch/x86/kernel/genx2apic_cluster.c
@@ -60,6 +60,13 @@ static void x2apic_send_IPI_mask(const struct cpumask *mask, int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
 
+	/*
+	 * Make previous memory operations globally visible before
+	 * sending the IPI. We need a serializing instruction or mfence
+	 * for this.
+	 */
+	smp_mb();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		__x2apic_send_IPI_dest(
@@ -76,6 +83,13 @@ static void
 	unsigned long query_cpu;
 	unsigned long flags;
 
+	/*
+	 * Make previous memory operations globally visible before
+	 * sending the IPI. We need a serializing instruction or mfence
+	 * for this.
+	 */
+	smp_mb();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		if (query_cpu == this_cpu)
@@ -93,6 +107,13 @@ static void x2apic_send_IPI_allbutself(int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
 
+	/*
+	 * Make previous memory operations globally visible before
+	 * sending the IPI. We need a serializing instruction or mfence
+	 * for this.
+	 */
+	smp_mb();
+
 	local_irq_save(flags);
 	for_each_online_cpu(query_cpu) {
 		if (query_cpu == this_cpu)
diff --git a/arch/x86/kernel/genx2apic_phys.c b/arch/x86/kernel/genx2apic_phys.c
index 5cbae8a..f48f282 100644
--- a/arch/x86/kernel/genx2apic_phys.c
+++ b/arch/x86/kernel/genx2apic_phys.c
@@ -58,6 +58,13 @@ static void x2apic_send_IPI_mask(const struct cpumask *mask, int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
 
+	/*
+	 * Make previous memory operations globally visible before
+	 * sending the IPI. We need a serializing instruction or mfence
+	 * for this.
+	 */
+	smp_mb();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
@@ -73,6 +80,13 @@ static void
 	unsigned long query_cpu;
 	unsigned long flags;
 
+	/*
+	 * Make previous memory operations globally visible before
+	 * sending the IPI. We need a serializing instruction or mfence
+	 * for this.
+	 */
+	smp_mb();
+
 	local_irq_save(flags);
 	for_each_cpu(query_cpu, mask) {
 		if (query_cpu != this_cpu)
@@ -89,6 +103,13 @@ static void x2apic_send_IPI_allbutself(int vector)
 	unsigned long query_cpu;
 	unsigned long flags;
 
+	/*
+	 * Make previous memory operations globally visible before
+	 * sending the IPI. We need a serializing instruction or mfence
+	 * for this.
+	 */
+	smp_mb();
+
 	local_irq_save(flags);
 	for_each_online_cpu(query_cpu) {
 		if (query_cpu == this_cpu)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 14c5af4..de14557 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -188,11 +188,6 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
 		       cpumask, cpumask_of(smp_processor_id()));
 
 	/*
-	 * Make the above memory operations globally visible before
-	 * sending the IPI.
-	 */
-	smp_mb();
-	/*
 	 * We have to send the IPI only to
 	 * CPUs affected.
 	 */
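
Regarding the xapic vs. x2apic difference in the changelog above, a rough
sketch (again only an illustration, not the real kernel code; the MMIO
mapping and the MSR write are stand-ins) of why one path needs the barrier
and the other does not:

#include <stdint.h>

static volatile uint32_t xapic_icr;	/* stands in for the UC-mapped xAPIC ICR */

static void xapic_icr_write_sketch(uint32_t val)
{
	/*
	 * xAPIC: an uncached MMIO store.  Uncached accesses are inherently
	 * serializing, so all earlier stores are already globally visible
	 * and no extra barrier is needed before sending the IPI.
	 */
	xapic_icr = val;
}

static void x2apic_icr_write_sketch(uint64_t val)
{
	/*
	 * x2APIC: a WRMSR to the ICR MSR.  MSR writes don't have
	 * serializing semantics, so the caller has to issue a serializing
	 * instruction or mfence first (the smp_mb() added above).  The
	 * privileged WRMSR itself is omitted from this sketch.
	 */
	(void)val;
}

int main(void)
{
	xapic_icr_write_sketch(0);
	x2apic_icr_write_sketch(0);
	return 0;
}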

