Date:	Tue, 29 May 2012 13:50:15 -0500
From:	Russ Anderson <rja@....com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	linux-kernel@...r.kernel.org, x86@...nel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	rja@...ricas.sgi.com
Subject: Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems

On Thu, May 24, 2012 at 05:34:13PM +0200, Frederic Weisbecker wrote:
> On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> > When multiple cpus on a multi-processor system call dump_stack()
> > at the same time, the backtrace lines get intermixed, making 
> > the output worthless.  Add a lock so each cpu stack dump comes
> > out as a coherent set.
> > 
> > For example, when a multi-processor system is NMIed, all of the
> > cpus call dump_stack() at the same time, resulting in output for
> > all of the cpus getting intermixed, making it impossible to tell what
> > any individual cpu was doing.  With this patch each cpu prints
> > its stack lines as a coherent set, so one can see what each cpu
> > was doing.
> > 
> > It has been tested on a 4096 cpu system.
> > 
> > Signed-off-by: Russ Anderson <rja@....com>
> 
> I don't think this is a good idea. What if an interrupt comes
> and calls this at the same time? Sure you can mask irqs but NMIs
> can call that too. In this case I prefer to have a messy report
> rather than a deadlock on the debug path.

Below is an updated patch with your recommended changes.

> May be something like that:
> 
> static atomic_t dump_lock = ATOMIC_INIT(-1);
> 
> static void dump_stack(void)
> {
> 	int was_locked;
> 	int old;
> 	int cpu;
> 
> 	preempt_disable();
> retry:
> 	cpu = smp_processor_id();
> 	old = atomic_cmpxchg(&dump_lock, -1, cpu);
> 	if (old == -1) {
> 		was_locked = 0;
> 	} else if (old == cpu) {
> 		was_locked = 1;
> 	} else {
> 		cpu_relax();
> 		goto retry;
> 	}
> 
> 	__dump_trace();
> 
> 	if (!was_locked)
> 		atomic_set(&dump_lock, -1);
> 
> 	preempt_enable();
> }
> 
> You could also use a spinlock with irq disabled and test in_nmi()
> but we could have a dump_trace() in an NMI before the nmi count is
> incremented. So the above is perhaps more robust.
> --
---

When multiple cpus on a multi-processor system call dump_stack()
at the same time, the backtrace lines get intermixed, making 
the output worthless.  Add a lock so each cpu stack dump comes
out as a coherent set.

For example, when a multi-processor system is NMIed, all of the
cpus call dump_stack() at the same time, resulting in output for
all of the cpus getting intermixed, making it impossible to tell what
any individual cpu was doing.  With this patch each cpu prints
its stack lines as a coherent set, so one can see what each cpu
was doing.

It has been tested on a 4096 cpu system.

Signed-off-by: Russ Anderson <rja@....com>

---
 arch/x86/kernel/dumpstack.c |   28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

Index: linux/arch/x86/kernel/dumpstack.c
===================================================================
--- linux.orig/arch/x86/kernel/dumpstack.c	2012-05-24 10:05:36.477576977 -0500
+++ linux/arch/x86/kernel/dumpstack.c	2012-05-27 16:58:26.527212233 -0500
@@ -182,7 +182,7 @@ void show_stack(struct task_struct *task
 /*
  * The architecture-independent dump_stack generator
  */
-void dump_stack(void)
+void __dump_stack(void)
 {
 	unsigned long bp;
 	unsigned long stack;
@@ -195,6 +195,32 @@ void dump_stack(void)
 		init_utsname()->version);
 	show_trace(NULL, NULL, &stack, bp);
 }
+
+static atomic_t dump_lock = ATOMIC_INIT(-1);
+
+void dump_stack(void)
+{
+	int was_locked, old, cpu;
+
+	preempt_disable();
+retry:
+	cpu = smp_processor_id();
+	old = atomic_cmpxchg(&dump_lock, -1, cpu);
+	if (old == -1) {
+		was_locked = 0;
+	} else if (old == cpu) {
+		was_locked = 1;
+	} else {
+		cpu_relax();
+		goto retry;
+	}
+
+	__dump_stack();
+
+	if (!was_locked)
+		atomic_set(&dump_lock, -1);
+	preempt_enable();
+}
 EXPORT_SYMBOL(dump_stack);
 
 static arch_spinlock_t die_lock = __ARCH_SPIN_LOCK_UNLOCKED;

-- 
Russ Anderson, OS RAS/Partitioning Project Lead  
SGI - Silicon Graphics Inc          rja@....com