Message-ID: <87sgkgw3xq.fsf@nanos.tec.linutronix.de>
Date: Thu, 16 Jan 2020 03:06:09 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Waiman Long <longman@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>
Cc: linux-kernel@...r.kernel.org, Mike Rapoport <rppt@...ux.ibm.com>,
Kees Cook <keescook@...omium.org>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v2] watchdog: Fix possible soft lockup warning at bootup
Waiman Long <longman@...hat.com> writes:
> [ 25.496379] watchdog: BUG: soft lockup - CPU#14 stuck for 22s! [swapper/14:0]
>
> Further analysis of the situation revealed that the smp_init() call
> itself took more than 20s for that 2-socket 56-core and 224-thread
> server.
>
> [ 0.115632] CPU1: Booted secondary processor 0x0000000100 [0x431f0af1]
> :
> [ 27.177282] CPU223: Booted secondary processor 0x0000011b03 [0x431f0af1]
So what?
> By adding some instrumentation code, it was found that for CPU 14,
> watchdog_enable() was called early with a timestamp of 1. That activates
> the watchdog time checking logic. It was also found that the monotonic
> time measured during the smp_init() phase advances much more slowly than
> the real elapsed time, as shown by the debug printf output below:
>
> [ 1.138522] run_queues, watchdog_timer_fn: now = 170000000
> [ 25.519391] run_queues, watchdog_timer_fn: now = 4170000000
>
> In this particular case, it took about 24.4s of elapsed time for the
> clock to advance 4s, which is the soft expiration time required to
> trigger the call to watchdog_timer_fn(). That clock slowdown stopped
> once the smp_init() call was done, and the clock thereafter ran at the
> same rate as the elapsed time.
So the consequence is that you defer the soft lockup detector
initialization instead of figuring out WHY this time difference
happens. There is no clock slowdown. The clock keeps ticking at the same
frequency, so there must be something else which causes this deviation.
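
For reference, the check which fires here boils down to simple arithmetic
in kernel/watchdog.c: the hrtimer period is softlockup_thresh *
NSEC_PER_SEC / 5 (4s with the default watchdog_thresh of 10), and the
report triggers once the sched_clock based timestamp runs more than 20s
past the last watchdog touch. Modeled in a few lines of userspace C (a
sketch with the numbers from the report above, not the kernel code; the
~23s reading is inferred from the "stuck for 22s" line, not measured):

#include <stdio.h>

/* Default /proc/sys/kernel/watchdog_thresh */
#define WATCHDOG_THRESH 10

/* The soft lockup threshold is twice watchdog_thresh, i.e. 20s */
static unsigned int get_softlockup_thresh(void)
{
        return WATCHDOG_THRESH * 2;
}

int main(void)
{
        /*
         * Last time the watchdog was touched, in seconds; 1 as in the
         * instrumented boot above.
         */
        unsigned long touch_ts = 1;
        /*
         * Timestamp seen when the 4s-period hrtimer finally fires.
         * ~24.4s of real time passed before expiry, so by then the
         * timestamp reads ~23s.
         */
        unsigned long now = 23;

        if (now > touch_ts + get_softlockup_thresh())
                printf("BUG: soft lockup - stuck for %lus!\n",
                       now - touch_ts);
        return 0;
}

That prints exactly the "stuck for 22s" above: a timer expiring 20s late
is indistinguishable from a stuck CPU as far as this check is concerned.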
> On a comparable CN9980 system from HPE, there was also a bit of clock
> slowdown, but not as much as on the Cavium system:
>
> [ 1.177068] run_queues, watchdog_timer_fn: now = 1010000000
> [ 5.577925] run_queues, watchdog_timer_fn: now = 5010000000
>
> Similar clock slowdown was not observed on x86-64 systems.
>
> The clock slowdown on arm64 systems seems to be hardware-specific.
Maybe, but that's not a reasonable explanation.
> Given the fact that the clock is less reliable during the smp_init()
Which fact? An observation is not a fact.
> call and that call can take a rather long time, one simple workaround
> to avoid this soft lockup problem is to move the lockup_detector_init()
> call to after smp_init(), when the clock is reliable. This does fix the
> soft lockup problem for that Cavium system and does not seem to affect
> the operation of the watchdog. This is much easier than trying to fix
> the clock slowdown problem in smp_init(), as the watchdog timer is the
> only timer function that can be active at such an early boot stage anyway.
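
For the record, the proposed change boils down to roughly this move in
init/main.c's kernel_init_freeable() (a sketch of the idea, not the
actual hunk):

        do_pre_smp_initcalls();
-       lockup_detector_init();         /* armed before secondary CPUs boot */
        smp_init();
+       lockup_detector_init();         /* armed once all CPUs are online */
        sched_init_smp();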
Sure, it's much easier to weaken debug features instead of actually
analyzing why there is a deviation between clocks which should not have
any deviation in the first place. Root cause analysis is overrated and
just a pet peeve of grumpy old greybeards, right?
Andrew, please drop this hackery.
Thanks,
tglx