Message-ID: <20130206150833.34d08bd3@cuia.bos.redhat.com>
Date: Wed, 6 Feb 2013 15:08:33 -0500
From: Rik van Riel <riel@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: aquini@...hat.com, eric.dumazet@...il.com, lwoodman@...hat.com,
knoel@...hat.com, chegu_vinod@...com,
raghavendra.kt@...ux.vnet.ibm.com, mingo@...hat.com
Subject: [PATCH -v5 6/5] x86,smp: add debugging code to track spinlock delay
value
Subject: x86,smp: add debugging code to track spinlock delay value
From: Eric Dumazet <eric.dumazet@...il.com>

This code prints out the maximum spinlock delay value and the
backtrace that pushed it that far.

On systems with serial consoles, the act of printing can cause
the spinlock delay value to explode. It can still be useful as
a debugging tool, but is probably too verbose to merge upstream
in this form.

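For reference, the reporting condition in the hunk below can be sketched in isolation. This is a hypothetical user-space model (names `maxdelay` and `report_if_new_max` are illustrative, not from the kernel): a delay is printed and recorded only when it exceeds the previous per-CPU maximum by more than a third (`maxdelay * 4 < delay * 3`), which damps the feedback loop where the act of printing itself inflates subsequent delay samples.

```c
#include <stdio.h>

/* Sketch of the patch's ratio test, outside the kernel.
 * A new delay is reported only when delay > maxdelay * 4/3,
 * computed in integer arithmetic to avoid division. */
static unsigned int maxdelay;

static int report_if_new_max(unsigned int delay)
{
	if (maxdelay * 4 < delay * 3) {
		printf("new max delay %u (previous %u)\n", delay, maxdelay);
		maxdelay = delay;
		return 1;
	}
	return 0;
}
```

Because the threshold is a ratio rather than a strict maximum, a delay only slightly above the recorded peak stays quiet; the console (especially a slow serial console) is hit only on substantial jumps.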
Not-signed-off-by: Rik van Riel <riel@...hat.com>
Not-signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
---

arch/x86/kernel/smp.c | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index fbc5ff3..660f0ec 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -146,6 +146,8 @@ static DEFINE_PER_CPU(struct delay_entry [1 << DELAY_HASH_SHIFT], spinlock_delay
},
};
+static DEFINE_PER_CPU(u32, maxdelay);
+
/*
* Wait on a congested ticket spinlock. Many spinlocks are embedded in
* data structures; having many CPUs pounce on the cache line with the
@@ -209,6 +211,12 @@ void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
}
ent->hash = hash;
ent->delay = delay;
+
+ if (__this_cpu_read(maxdelay) * 4 < delay * 3) {
+ pr_err("cpu %d lock %p delay %d\n", smp_processor_id(), lock, delay>>DELAY_SHIFT);
+ __this_cpu_write(maxdelay, delay);
+ WARN_ON(1);
+ }
}
/*
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/