Message-ID: <d5393b0e-a296-3296-d376-c9178669747b@I-love.SAKURA.ne.jp>
Date: Fri, 16 Sep 2022 23:15:45 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Shaokun Zhang <zhangshaokun@...ilicon.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Petr Mladek <pmladek@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ben Dooks <ben.dooks@...ive.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Luis Chamberlain <mcgrof@...nel.org>,
Xiaoming Ni <nixiaoming@...wei.com>,
John Ogness <john.ogness@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>
Subject: [PATCH (repost)] locking/lockdep: add debug_show_all_lock_holders()
Currently, check_hung_uninterruptible_tasks() reports which locks are
held in the system, but not the backtraces of the threads holding them.
Also, lockdep_print_held_locks() does not report details of locks held
by a thread if that thread is in TASK_RUNNING state. Several years of
experience debugging without a vmcore have taught me that these
limitations are a barrier to understanding what went wrong in syzbot's
"INFO: task hung in" reports.
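For reference, a rough paraphrase of lockdep_print_held_locks() (based
on kernel/locking/lockdep.c; the exact code may differ by kernel
version):

  static void lockdep_print_held_locks(struct task_struct *p)
  {
  	int i, depth = READ_ONCE(p->lockdep_depth);

  	if (!depth)
  		printk("no locks held by %s/%d.\n", p->comm, task_pid_nr(p));
  	else
  		printk("%d lock%s held by %s/%d:\n", depth,
  		       depth > 1 ? "s" : "", p->comm, task_pid_nr(p));
  	/*
  	 * Printing held locks of a thread which is not sleeping is not
  	 * reliable, so the per-lock details are skipped for it.
  	 */
  	if (p != current && task_is_running(p))
  		return;
  	for (i = 0; i < depth; i++) {
  		printk(" #%d: ", i);
  		print_lock(p->held_locks + i);
  	}
  }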
I initially thought that "INFO: task hung in" reports were caused by
over-stressing, but I have come to understand that over-stressing is
unlikely. I now consider that there is likely a deadlock/livelock bug
which lockdep cannot report as a deadlock by the time "INFO: task hung
in" is reported.
A typical case is that thread-1 is waiting for something to happen
(e.g. in wait_event_*()) with a lock held. When thread-2 tries to take
that lock using e.g. mutex_lock(), check_hung_uninterruptible_tasks()
reports that thread-2 is hung and that thread-1 is holding a lock which
thread-2 is trying to take. But currently
check_hung_uninterruptible_tasks() cannot report the exact location of
thread-1, which would give us an important hint for understanding why
thread-1 has been holding that lock for so long.
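A minimal sketch of that pattern (hypothetical code; the lock and
function names are made up for illustration):

  #include <linux/mutex.h>
  #include <linux/printk.h>
  #include <linux/wait.h>

  static DEFINE_MUTEX(foo_lock);
  static DECLARE_WAIT_QUEUE_HEAD(foo_wq);
  static bool foo_ready;

  /* thread-1: sleeps interruptibly with foo_lock held. */
  static int thread1_fn(void *arg)
  {
  	mutex_lock(&foo_lock);
  	/* Not in TASK_UNINTERRUPTIBLE, so never reported as hung itself. */
  	if (wait_event_interruptible(foo_wq, foo_ready))
  		pr_info("thread-1 interrupted\n");
  	mutex_unlock(&foo_lock);
  	return 0;
  }

  /* thread-2: blocks in TASK_UNINTERRUPTIBLE and is reported as hung. */
  static int thread2_fn(void *arg)
  {
  	mutex_lock(&foo_lock);
  	mutex_unlock(&foo_lock);
  	return 0;
  }

Here, check_hung_uninterruptible_tasks() points at thread-2, and
debug_show_all_locks() says that thread-1 holds foo_lock, but nothing
shows that thread-1 is sitting in wait_event_interruptible().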
When check_hung_uninterruptible_tasks() reports a thread waiting for a
lock, it is important to also report the backtraces of the threads
which already hold that lock. Therefore, allow
check_hung_uninterruptible_tasks() to report the exact location of any
thread which is holding a lock.
Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
---
This is a repost of https://lkml.kernel.org/r/82af40cc-bf85-2b53-b8f9-dfc12e66a781@I-love.SAKURA.ne.jp .
I think there was no critical objection blocking this change.
I wish that lockdep would continue tracking locks (i.e. debug_locks
would remain 1) even after something has gone wrong, for I have
recently been encountering problems that disable lockdep during the
boot stage.
Keeping debug_locks enabled would be noisy if the possibility of e.g. a
circular locking dependency were reported every time, but tracking
locks even after something has gone wrong would help
debug_show_all_lock_holders() survive problems during the boot stage.
I'm not expecting lockdep to report the same problem forever. Reporting
the possibility of each problem pattern (e.g. circular locking
dependency) at most once, by using cmpxchg() inside the reporting
functions that call printk(), would be enough. I'm expecting lockdep to
continue working without calling printk() even after one of the problem
patterns (e.g. circular locking dependency) has been printk()ed, so
that debug_show_all_locks()/debug_show_all_lock_holders() can call
printk() when needed.
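A minimal sketch of what I have in mind (hypothetical, not part of this
patch; the variable and function names are made up):

  #include <linux/atomic.h>
  #include <linux/printk.h>

  /* 0 = not reported yet, 1 = already reported. */
  static int circular_dep_reported;

  static void report_circular_dependency(void)
  {
  	/*
  	 * Only the first caller observes 0 and prints the report; later
  	 * callers return silently while debug_locks stays at 1, so
  	 * debug_show_all_locks()/debug_show_all_lock_holders() keep
  	 * working.
  	 */
  	if (cmpxchg(&circular_dep_reported, 0, 1) != 0)
  		return;
  	pr_warn("possible circular locking dependency detected\n");
  	/* ... print the rest of the usual lockdep report here ... */
  }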
Changing the debug_locks behavior is a future patch. For now, this
patch alone will help with debugging Greg's usb.git#usb-testing tree,
which is generating many "INFO: task hung in" reports.
include/linux/debug_locks.h | 5 +++++
kernel/hung_task.c | 2 +-
kernel/locking/lockdep.c | 32 ++++++++++++++++++++++++++++++++
3 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/include/linux/debug_locks.h b/include/linux/debug_locks.h
index dbb409d..0567d5c 100644
--- a/include/linux/debug_locks.h
+++ b/include/linux/debug_locks.h
@@ -50,6 +50,7 @@ static __always_inline int __debug_locks_off(void)
#ifdef CONFIG_LOCKDEP
extern void debug_show_all_locks(void);
extern void debug_show_held_locks(struct task_struct *task);
+extern void debug_show_all_lock_holders(void);
extern void debug_check_no_locks_freed(const void *from, unsigned long len);
extern void debug_check_no_locks_held(void);
#else
@@ -61,6 +62,10 @@ static inline void debug_show_held_locks(struct task_struct *task)
{
}
+static inline void debug_show_all_lock_holders(void)
+{
+}
+
static inline void
debug_check_no_locks_freed(const void *from, unsigned long len)
{
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index bb2354f..18e22bb 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -205,7 +205,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
unlock:
rcu_read_unlock();
if (hung_task_show_lock)
- debug_show_all_locks();
+ debug_show_all_lock_holders();
if (hung_task_show_all_bt) {
hung_task_show_all_bt = false;
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 64a13eb..d062541 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -55,6 +55,7 @@
#include <linux/rcupdate.h>
#include <linux/kprobes.h>
#include <linux/lockdep.h>
+#include <linux/sched/debug.h>
#include <asm/sections.h>
@@ -6509,6 +6510,37 @@ void debug_show_all_locks(void)
pr_warn("=============================================\n\n");
}
EXPORT_SYMBOL_GPL(debug_show_all_locks);
+
+void debug_show_all_lock_holders(void)
+{
+ struct task_struct *g, *p;
+
+ if (unlikely(!debug_locks)) {
+ pr_warn("INFO: lockdep is turned off.\n");
+ return;
+ }
+ pr_warn("\nShowing all threads with locks held in the system:\n");
+
+ rcu_read_lock();
+ for_each_process_thread(g, p) {
+ if (!p->lockdep_depth)
+ continue;
+ /*
+ * Assuming that the caller of this function is in a process
+ * context without any locks held, skip current thread which is
+ * holding only RCU read lock.
+ */
+ if (p == current)
+ continue;
+ sched_show_task(p);
+ lockdep_print_held_locks(p);
+ touch_nmi_watchdog();
+ touch_all_softlockup_watchdogs();
+ }
+ rcu_read_unlock();
+ pr_warn("\n");
+ pr_warn("=============================================\n\n");
+}
#endif
/*
--
1.8.3.1