Message-Id: <20190605140954.28471-1-pmladek@suse.com>
Date: Wed, 5 Jun 2019 16:09:51 +0200
From: Petr Mladek <pmladek@...e.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: Laurence Oberman <loberman@...hat.com>,
Vincent Whitchurch <vincent.whitchurch@...s.com>,
Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org,
Petr Mladek <pmladek@...e.com>
Subject: [RFC 0/3] watchdog/softlockup: Make softlockup reports more reliable and useful
Hi,
we were analyzing logs with several softlockup reports in flush_tlb_kernel_range().
They were confusing. In particular, it was not clear whether we were looking at
a deadlock, a livelock, or several independent softlockups.
It turned out that even a simple busy loop:
	while (true)
		cpu_relax();
is able to produce several softlockup reports:
[ 168.277520] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865]
[ 196.277604] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865]
[ 236.277522] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [cat:4865]
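For reference, the busy loop above only triggers the softlockup detector when it
spins in kernel mode with preemption disabled. A minimal sketch of how such a
loop could be wired up as a standalone test module is below; the module and
function names are illustrative only (the actual 3rd patch hooks the test into
the fs/proc files listed in the diffstat instead):

	/*
	 * Hypothetical standalone module that provokes a softlockup.
	 * Names are illustrative; not taken from the patches.
	 */
	#include <linux/module.h>
	#include <asm/processor.h>

	static int __init softlockup_test_init(void)
	{
		/*
		 * Disable preemption so that neither the scheduler nor
		 * the watchdog thread can interrupt the loop on this CPU.
		 * The softlockup watchdog then fires from hard-interrupt
		 * context after watchdog_thresh * 2 seconds.
		 */
		preempt_disable();
		while (true)
			cpu_relax();
		/* Unreachable; kept for symmetry. */
		preempt_enable();
		return 0;
	}
	module_init(softlockup_test_init);
	MODULE_LICENSE("GPL");

Loading such a module makes the affected CPU unusable until reboot, so it is
only suitable for throwaway test kernels.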
I tried to understand the tricky watchdog code and produced three patches.
The first two should be helpful when debugging the original real bug:
The 1st patch prevents restarting the watchdog from unrelated locations.
The 2nd patch helps to distinguish the possible situations by reporting
the same softlockup regularly.
The 3rd patch can be used for testing the problem.
The watchdog code might deserve even more cleanup. Anyway, I would
like to hear others' opinions first.
Petr Mladek (3):
  watchdog/softlockup: Preserve original timestamp when touching
    watchdog externally
  watchdog/softlockup: Report the same softlockup regularly
  Test softlockup
fs/proc/consoles.c | 5 ++++
fs/proc/version.c | 7 +++++
kernel/watchdog.c | 85 +++++++++++++++++++++++++++++++-----------------------
3 files changed, 61 insertions(+), 36 deletions(-)
--
2.16.4