Message-ID: <20191021124214.GD1817@hirez.programming.kicks-ass.net>
Date: Mon, 21 Oct 2019 14:42:14 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Petr Mladek <pmladek@...e.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Laurence Oberman <loberman@...hat.com>,
Vincent Whitchurch <vincent.whitchurch@...s.com>,
Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] watchdog/softlockup: Preserve original timestamp
when touching watchdog externally
On Mon, Aug 19, 2019 at 12:47:30PM +0200, Petr Mladek wrote:
> A bug report included the same softlockup in flush_tlb_kernel_range()
> at regular intervals. Unfortunately, it was not clear whether any
> progress was being made.
>
> The situation can be simulated with a simple busy loop:
>
> while (true)
>         cpu_relax();
>
> The softlockup detector produces:
>
> [ 168.277520] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865]
> [ 196.277604] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865]
> [ 236.277522] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [cat:4865]
>
> One would expect either a single softlockup report, or repeated
> reports with an increasing duration.
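
(Aside: a sketch of how such a loop is typically run to trip the
detector. It has to spin in kernel context with preemption disabled,
since a plain userspace spin would simply be scheduled away. The module
below is illustrative only; the name softlockup_test and its structure
are assumptions, not part of the patch.)

    /*
     * Hypothetical test module: spin forever in kernel context so the
     * softlockup detector fires on this CPU.
     */
    #include <linux/module.h>
    #include <linux/types.h>
    #include <linux/preempt.h>
    #include <asm/processor.h>

    static int __init softlockup_test_init(void)
    {
            preempt_disable();      /* keep the scheduler off this CPU */
            while (true)
                    cpu_relax();    /* spins forever; insmod never returns */
            return 0;               /* unreachable */
    }
    module_init(softlockup_test_init);

    MODULE_LICENSE("GPL");
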
Let's just say our expectations differ.
> The result is that each softlockup is reported only once unless
> another process gets scheduled:
>
> [ 320.248948] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [cat:4916]
Which would greatly confuse me, as the above would have me think the
situation got resolved (no more lockups reported) even though it is
still very much stuck there.
IOW, I don't see how this makes anything better. You're removing
information.
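
For reference, a simplified model of where the "stuck for Ns" figure
comes from (illustrative; the names mirror but do not reproduce the
exact kernel/watchdog.c code): the detector compares the current
timestamp against the last per-CPU watchdog touch and reports the delta
once it crosses the threshold.

    #include <linux/jiffies.h>      /* time_after() */

    /* Simplified model; signature and names are assumptions. */
    static unsigned long is_softlockup(unsigned long touch_ts,
                                       unsigned long now,
                                       unsigned long thresh)
    {
            if (time_after(now, touch_ts + thresh))
                    return now - touch_ts;  /* the "stuck for Ns" value */
            return 0;
    }

On this model, whether touch_ts is refreshed after a report is what
separates the two behaviours discussed above: periodic reports of
roughly constant duration versus a single report followed by silence,
the latter being the information loss objected to here.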