Message-ID: <20231220131534.2.Ia5906525d440d8e8383cde31b7c61c2aadc8f907@changeid>
Date: Wed, 20 Dec 2023 13:15:35 -0800
From: Douglas Anderson <dianders@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Petr Mladek <pmladek@...e.com>,
Li Zhe <lizhe.67@...edance.com>,
Pingfan Liu <kernelfans@...il.com>,
John Ogness <john.ogness@...utronix.de>,
Lecopzer Chen <lecopzer.chen@...iatek.com>,
Douglas Anderson <dianders@...omium.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/4] watchdog/softlockup: Use printk_cpu_sync_get_irqsave() to serialize reporting

Instead of introducing a spinlock, use printk_cpu_sync_get_irqsave()
and printk_cpu_sync_put_irqrestore() to serialize softlockup
reporting. On its own this doesn't have any real advantage over the
spinlock, but it will allow us to use the same function in a future
change to also serialize hardlockup crawls.
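
For reference, the usage pattern of this API (the macros live in
include/linux/printk.h) looks roughly like the sketch below; the
comments are mine, not from the header:

  unsigned long flags;

  /*
   * Spin until this CPU owns the printk cpu-sync. Interrupts are
   * disabled and saved in 'flags' for as long as we hold it.
   */
  printk_cpu_sync_get_irqsave(flags);

  /* Everything printed here stays contiguous in the log. */
  pr_emerg("multi-line report from CPU%d\n", smp_processor_id());

  printk_cpu_sync_put_irqrestore(flags);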

NOTE: for the most part this serialization is important because we
often end up in the show_regs() path, and that path has no built-in
serialization if there are multiple callers at once. However, even in
the case where we end up in the dump_stack() path this still has some
advantage: the stack trace is guaranteed to appear together with the
lockup message in the logs, with no interleaving.
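
As an aside, this is the same pattern the all-CPU backtrace code
already uses: nmi_cpu_backtrace() in lib/nmi_backtrace.c does roughly
the following (simplified here):

  unsigned long flags;

  printk_cpu_sync_get_irqsave(flags);
  pr_warn("NMI backtrace for cpu %d\n", cpu);
  if (regs)
          show_regs(regs);
  else
          dump_stack();
  printk_cpu_sync_put_irqrestore(flags);

Because show_regs() is many separate printk() calls, without such a
wrapper two CPUs reporting at once can interleave line-by-line.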

NOTE: the fact that printk_cpu_sync_get_irqsave() is allowed to be
called multiple times on the same CPU is important here. Specifically,
we hold the "lock" while calling dump_stack(), which also takes the
same "lock". This is explicitly documented to be OK and means we don't
need to introduce a variant of dump_stack() that doesn't take the
lock.
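
dump_stack() takes the cpu-sync internally (see lib/dump_stack.c), so
the nested call only works because ownership is per-CPU with a nesting
count. Below is a simplified sketch of that idea, with names of my own
choosing ('owner'/'nested'); the real implementation lives in
kernel/printk/printk.c and has more careful memory ordering:

  static atomic_t owner = ATOMIC_INIT(-1);
  static atomic_t nested = ATOMIC_INIT(0);

  static bool cpu_sync_try_get(void)
  {
          int cpu = smp_processor_id();
          int old = atomic_cmpxchg_acquire(&owner, -1, cpu);

          if (old == -1)
                  return true;    /* we took ownership */
          if (old == cpu) {
                  /* Already ours: count the nesting, no deadlock. */
                  atomic_inc(&nested);
                  return true;
          }
          return false;           /* owned by another CPU; caller spins */
  }
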
Signed-off-by: Douglas Anderson <dianders@...omium.org>
---
 kernel/watchdog.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index b4fd2f12137f..526041a1100a 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -454,7 +454,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 	struct pt_regs *regs = get_irq_regs();
 	int duration;
 	int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace;
-	static DEFINE_SPINLOCK(watchdog_output_lock);
+	unsigned long flags;
 
 	if (!watchdog_enabled)
 		return HRTIMER_NORESTART;
@@ -521,7 +521,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 	/* Start period for the next softlockup warning. */
 	update_report_ts();
 
-	spin_lock(&watchdog_output_lock);
+	printk_cpu_sync_get_irqsave(flags);
 	pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
 		 smp_processor_id(), duration,
 		 current->comm, task_pid_nr(current));
@@ -531,7 +531,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 		show_regs(regs);
 	else
 		dump_stack();
-	spin_unlock(&watchdog_output_lock);
+	printk_cpu_sync_put_irqrestore(flags);
 
 	if (softlockup_all_cpu_backtrace) {
 		trigger_allbutcpu_cpu_backtrace(smp_processor_id());
--
2.43.0.472.g3155946c3a-goog