Date:	Thu, 3 Jan 2013 00:25:33 -0500
From:	Rik van Riel <>
To:	Rik van Riel <>
	Jan Beulich <>,
	Thomas Gleixner <>,
Subject: [RFC PATCH 5/5] x86,smp: add debugging code to track spinlock delay

From: Eric Dumazet <>

This code prints out the maximum spinlock delay value seen so far on each
CPU, along with a backtrace from the point where that maximum was reached.

On systems with serial consoles, the act of printing is itself slow enough
to inflate the measured spinlock delays, which can make the maximum explode.
The code can still be useful as a debugging tool, but it is probably too
verbose to merge upstream in this form.
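
For illustration only, here is a small userspace analogue of the hunk added
below (not part of the patch): it keeps a running per-thread maximum,
standing in for the per-CPU maxdelay variable, and dumps a backtrace
whenever a new maximum is seen, using glibc's backtrace() where the kernel
code uses pr_err() and WARN_ON(1). The function name note_delay() is made
up for this sketch; build with -rdynamic to get symbol names.

#include <execinfo.h>
#include <stdio.h>
#include <unistd.h>

/* Per-thread stand-in for the patch's per-CPU maxdelay. */
static __thread unsigned int maxdelay;

static void note_delay(unsigned int delay)
{
	void *trace[16];
	int depth;

	if (delay <= maxdelay)
		return;

	maxdelay = delay;
	fprintf(stderr, "new max delay %u\n", delay);

	/* Rough equivalent of WARN_ON(1): show how we got here. */
	depth = backtrace(trace, 16);
	backtrace_symbols_fd(trace, depth, STDERR_FILENO);
}

int main(void)
{
	note_delay(100);	/* first observation becomes the max and is reported */
	note_delay(50);		/* smaller value: silent */
	note_delay(500);	/* new maximum: reported with a backtrace */
	return 0;
}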

Not-signed-off-by: Rik van Riel <>
Not-signed-off-by: Eric Dumazet <>
 arch/x86/kernel/smp.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 29360c4..a4401ed 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -148,6 +148,8 @@ static DEFINE_PER_CPU(struct delay_entry [1 << DELAY_HASH_SHIFT], spinlock_delay
+static DEFINE_PER_CPU(u16, maxdelay);
 void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
 	__ticket_t head = inc.head, ticket = inc.tail;
@@ -195,6 +197,12 @@ void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
 	ent->hash = hash;
 	ent->delay = delay;
+	if (__this_cpu_read(maxdelay) < delay) {
+		pr_err("cpu %d lock %p delay %d\n", smp_processor_id(), lock, delay);
+		__this_cpu_write(maxdelay, delay);
+		WARN_ON(1);
+	}
