Message-ID: <20070208203210.GB9798@osiris.ibm.com>
Date:	Thu, 8 Feb 2007 21:32:10 +0100
From:	Heiko Carstens <heiko.carstens@...ibm.com>
To:	Andrew Morton <akpm@...l.org>, Ingo Molnar <mingo@...e.hu>,
	Andi Kleen <ak@...e.de>, Jan Glauber <jan.glauber@...ibm.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>
Cc:	linux-kernel@...r.kernel.org
Subject: [patch] i386/x86_64: smp_call_function locking inconsistency

On i386/x86_64, smp_call_function_single() takes call_lock with
spin_lock_bh(). To me this implies that it is legal to call
smp_call_function_single() from softirq context.
It isn't, because smp_call_function() takes call_lock with just
spin_lock(), so we can easily deadlock:

-> [process context]
-> smp_call_function()
-> spin_lock(&call_lock)
-> IRQ -> do_softirq -> tasklet
-> [softirq context]
-> smp_call_function_single()
-> spin_lock_bh(&call_lock)
-> dead
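
To make the failure mode explicit, here is a minimal sketch of the
pattern (illustrative kernel-style C, function names invented, not
meant to compile against any tree): the process-context path takes
call_lock without disabling softirqs, so a tasklet raised on the same
CPU can interrupt the critical section and then spin on the lock
forever.

static DEFINE_SPINLOCK(call_lock);

/* process context, as smp_call_function() does today */
static void process_context_path(void)
{
        spin_lock(&call_lock);          /* softirqs stay enabled */
        /* an IRQ arrives, do_softirq() runs a tasklet on this CPU */
        spin_unlock(&call_lock);
}

/* softirq context, which spin_lock_bh() in smp_call_function_single()
 * suggests is allowed */
static void softirq_context_path(void)
{
        spin_lock_bh(&call_lock);       /*
                                         * Spins forever: the lock is
                                         * already held by the interrupted
                                         * process context on this CPU.
                                         */
        spin_unlock_bh(&call_lock);
}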

So either all spin_lock_bh calls should be converted to spin_lock,
which would limit smp_call_function()/smp_call_function_single()
to process context with irqs enabled.
Or the spin_lock calls could be converted to spin_lock_bh, which
would make it possible to call these two functions even from softirq
context. AFAICS this should be safe.
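
A short sketch of why the second option should be safe (again only
illustrative, not code from the tree): spin_lock_bh() defers softirq
processing on the local CPU while call_lock is held, so the scenario
above cannot happen, and a softirq caller on another CPU simply spins
until the process-context holder releases the lock.

/* process context, after converting to spin_lock_bh() */
spin_lock_bh(&call_lock);       /* local softirqs are now deferred */
/* ... send IPIs, wait for the other CPUs ... */
spin_unlock_bh(&call_lock);     /* any pending softirqs run here */

/* softirq context, e.g. the iucv driver */
spin_lock_bh(&call_lock);       /*
                                 * Can only wait for a holder on another
                                 * CPU, which will release the lock;
                                 * no self-deadlock.
                                 */
spin_unlock_bh(&call_lock);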

I just stumbled across this because we have the same inconsistency
on s390, and our new iucv driver makes use of smp_call_function()
in softirq context.

The patch below converts the spin_lock calls in i386/x86_64 to
spin_lock_bh, so they would be consistent with s390.

Patch is _not_ compile tested.

Cc: Andi Kleen <ak@...e.de>
Cc: Ingo Molnar <mingo@...e.hu>
Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
---
 arch/i386/kernel/smp.c   |    8 ++++----
 arch/x86_64/kernel/smp.c |   10 +++++-----
 2 files changed, 9 insertions(+), 9 deletions(-)

Index: linux-2.6/arch/i386/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/i386/kernel/smp.c
+++ linux-2.6/arch/i386/kernel/smp.c
@@ -527,7 +527,7 @@ static struct call_data_struct *call_dat
  * remote CPUs are nearly ready to execute <<func>> or are or have executed.
  *
  * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
+ * hardware interrupt handler.
  */
 int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
 			int wait)
@@ -536,10 +536,10 @@ int smp_call_function (void (*func) (voi
 	int cpus;
 
 	/* Holding any lock stops cpus from going down. */
-	spin_lock(&call_lock);
+	spin_lock_bh(&call_lock);
 	cpus = num_online_cpus() - 1;
 	if (!cpus) {
-		spin_unlock(&call_lock);
+		spin_unlock_bh(&call_lock);
 		return 0;
 	}
 
@@ -566,7 +566,7 @@ int smp_call_function (void (*func) (voi
 	if (wait)
 		while (atomic_read(&data.finished) != cpus)
 			cpu_relax();
-	spin_unlock(&call_lock);
+	spin_unlock_bh(&call_lock);
 
 	return 0;
 }
Index: linux-2.6/arch/x86_64/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/x86_64/kernel/smp.c
+++ linux-2.6/arch/x86_64/kernel/smp.c
@@ -439,15 +439,15 @@ static void __smp_call_function (void (*
  * remote CPUs are nearly ready to execute func or are or have executed.
  *
  * You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
+ * hardware interrupt handler.
  * Actually there are a few legal cases, like panic.
  */
 int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
 			int wait)
 {
-	spin_lock(&call_lock);
+	spin_lock_bh(&call_lock);
 	__smp_call_function(func,info,nonatomic,wait);
-	spin_unlock(&call_lock);
+	spin_unlock_bh(&call_lock);
 	return 0;
 }
 EXPORT_SYMBOL(smp_call_function);
@@ -477,13 +477,13 @@ void smp_send_stop(void)
 	if (reboot_force)
 		return;
 	/* Don't deadlock on the call lock in panic */
-	if (!spin_trylock(&call_lock)) {
+	if (!spin_trylock_bh(&call_lock)) {
 		/* ignore locking because we have panicked anyways */
 		nolock = 1;
 	}
 	__smp_call_function(smp_really_stop_cpu, NULL, 0, 0);
 	if (!nolock)
-		spin_unlock(&call_lock);
+		spin_unlock_bh(&call_lock);
 
 	local_irq_disable();
 	disable_local_APIC();