Message-Id: <20191029092423.17825-1-haokexin@gmail.com>
Date: Tue, 29 Oct 2019 17:24:23 +0800
From: Kevin Hao <haokexin@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH] dump_stack: Avoid the livelock of the dump_lock
In the current code, we use atomic_cmpxchg() to serialize the output
of dump_stack(), but this implementation suffers from the thundering
herd problem. We have observed such a livelock on a Marvell CN96xx
board (24 CPUs) when heavily invoking dump_stack() in a kprobe
handler. Instead, we can use a spinlock here and leverage the
spinlock implementation (either ticket or queued spinlock) to avoid
this kind of livelock. Since dump_stack() runs with IRQs disabled,
use a raw_spinlock_t to keep it safe for the RT kernel.
Signed-off-by: Kevin Hao <haokexin@...il.com>
---
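As a side note, here is a minimal userspace sketch (not kernel code;
the helpers cmpxchg_lock(), ticket_lock() and friends are made up for
this illustration) contrasting the two schemes. The cmpxchg pattern
has no queueing, so under heavy contention nothing guarantees that a
given waiter ever wins the race, while a ticket lock grants the lock
in strict FIFO order:

#include <stdatomic.h>
#include <sched.h>

/* Old scheme: every waiter retries the same cmpxchg on each spin. */
static atomic_int owner = -1;

static void cmpxchg_lock(int cpu)
{
        int expected;

        for (;;) {
                expected = -1;
                /*
                 * All waiters hammer the same cache line; whichever
                 * CPU the coherency protocol happens to favour wins,
                 * and the rest retry.  Nothing guarantees that any
                 * particular waiter ever wins, hence the livelock
                 * under sustained contention.
                 */
                if (atomic_compare_exchange_strong(&owner, &expected, cpu))
                        return;
                sched_yield();          /* stand-in for cpu_relax() */
        }
}

static void cmpxchg_unlock(void)
{
        atomic_store(&owner, -1);
}

/*
 * New scheme (what a ticket spinlock does): each waiter takes a
 * number and waits for it to be called, so the lock is granted in
 * strict FIFO order and every waiter eventually makes progress.
 */
static atomic_int next_ticket;
static atomic_int now_serving;

static void ticket_lock(void)
{
        int my_ticket = atomic_fetch_add(&next_ticket, 1);

        while (atomic_load(&now_serving) != my_ticket)
                sched_yield();
}

static void ticket_unlock(void)
{
        atomic_fetch_add(&now_serving, 1);
}

int main(void)
{
        cmpxchg_lock(0);
        cmpxchg_unlock();

        ticket_lock();
        ticket_unlock();

        return 0;
}

A queued spinlock provides the same fairness guarantee with better
cache behaviour, which is why simply leveraging the kernel's spinlock
implementation is enough to stop the livelock.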
lib/dump_stack.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/lib/dump_stack.c b/lib/dump_stack.c
index 5cff72f18c4a..fa971f75f1e2 100644
--- a/lib/dump_stack.c
+++ b/lib/dump_stack.c
@@ -83,37 +83,35 @@ static void __dump_stack(void)
  * Architectures can override this implementation by implementing its own.
  */
 #ifdef CONFIG_SMP
-static atomic_t dump_lock = ATOMIC_INIT(-1);
+static DEFINE_RAW_SPINLOCK(dump_lock);
+static int dump_cpu = -1;
 
 asmlinkage __visible void dump_stack(void)
 {
         unsigned long flags;
         int was_locked;
-        int old;
         int cpu;
 
         /*
          * Permit this cpu to perform nested stack dumps while serialising
          * against other CPUs
          */
-retry:
         local_irq_save(flags);
         cpu = smp_processor_id();
-        old = atomic_cmpxchg(&dump_lock, -1, cpu);
-        if (old == -1) {
+
+        if (READ_ONCE(dump_cpu) != cpu) {
+                raw_spin_lock(&dump_lock);
+                dump_cpu = cpu;
                 was_locked = 0;
-        } else if (old == cpu) {
+        } else
                 was_locked = 1;
-        } else {
-                local_irq_restore(flags);
-                cpu_relax();
-                goto retry;
-        }
 
         __dump_stack();
 
-        if (!was_locked)
-                atomic_set(&dump_lock, -1);
+        if (!was_locked) {
+                dump_cpu = -1;
+                raw_spin_unlock(&dump_lock);
+        }
 
         local_irq_restore(flags);
 }
--
2.14.4