Message-ID: <20191021130425.ewiegm2425hkydb3@pathway.suse.cz>
Date: Mon, 21 Oct 2019 15:04:25 +0200
From: Petr Mladek <pmladek@...e.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Laurence Oberman <loberman@...hat.com>,
Vincent Whitchurch <vincent.whitchurch@...s.com>,
Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] watchdog/softlockup: Preserve original timestamp
when touching watchdog externally
On Mon 2019-10-21 14:42:14, Peter Zijlstra wrote:
> On Mon, Aug 19, 2019 at 12:47:30PM +0200, Petr Mladek wrote:
> > A bug report included the same softlockup in flush_tlb_kernel_range()
> > at regular intervals. Unfortunately, it was not clear whether any
> > progress was being made or not.
> >
> > The situation can be simulated with a simple busy loop:
> >
> > 	while (true)
> > 		cpu_relax();
> >
> > The softlockup detector produces:
> >
> > [ 168.277520] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865]
> > [ 196.277604] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865]
> > [ 236.277522] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [cat:4865]
> >
> > One would expect either a single softlockup report, or repeated
> > reports with an increasing duration.
>
> Let's just say our expectations differ.
>
> > The result is that each softlockup is reported only once unless
> > another process gets scheduled:
> >
> > [ 320.248948] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [cat:4916]
>
> Which would greatly confuse me; as the above would have me think the
> situation got resolved (no more lockups reported) even though it is
> still very much stuck there.
>
> IOW, I don't see how this makes anything better. You're removing
> information.
The second patch brings the regular reports back, but with a correctly
counted time ("stuck for XXs").
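
Roughly, the detector compares the current time with a timestamp of the
last time the watched CPU made progress, and the second patch keeps that
timestamp across reports so that the printed duration keeps growing.
Below is a simplified userspace model of the accounting; the names and
structure are made up for illustration, the real logic lives in
kernel/watchdog.c and differs in detail:

	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	/* Illustrative model only; not the kernel's actual code. */
	static time_t touch_ts;		/* last observed progress */

	static void watchdog_touch(void)
	{
		touch_ts = time(NULL);
	}

	/* Called periodically, like the watchdog timer. */
	static void watchdog_check(int threshold)
	{
		time_t duration = time(NULL) - touch_ts;

		if (duration < threshold)
			return;

		/*
		 * touch_ts is deliberately not reset here, so the
		 * reported duration grows across reports instead of
		 * restarting from the threshold each time.
		 */
		printf("BUG: soft lockup - stuck for %llds!\n",
		       (long long)duration);
	}

	int main(void)
	{
		watchdog_touch();
		for (;;) {
			sleep(5);
			watchdog_check(10);
		}
	}
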
I split it into two patches because I was not sure which behavior
would be preferred. I prefer the regular reports as well. FWIW, the
busy loop from the changelog has to run in kernel context without
scheduling to trigger the detector; see the module sketch below.
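
A minimal sketch of such a test module; the module name and the details
are my illustration, not the reproducer from the original report:

	#include <linux/module.h>
	#include <linux/kthread.h>
	#include <linux/err.h>
	#include <linux/sched.h>

	static struct task_struct *spin_task;

	static int spin_fn(void *unused)
	{
		/*
		 * Disable preemption so that nothing else can run on
		 * this CPU, even on CONFIG_PREEMPT kernels. The soft
		 * lockup report should appear after roughly 20s.
		 */
		preempt_disable();
		while (!kthread_should_stop())
			cpu_relax();
		preempt_enable();
		return 0;
	}

	static int __init spin_init(void)
	{
		spin_task = kthread_run(spin_fn, NULL, "softlockup_test");
		return PTR_ERR_OR_ZERO(spin_task);
	}

	static void __exit spin_exit(void)
	{
		kthread_stop(spin_task);
	}

	module_init(spin_init);
	module_exit(spin_exit);
	MODULE_LICENSE("GPL");
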
Best Regards,
Petr