Message-ID: <666cedea-2dbc-254e-467b-c02a3a2d8795@linux.ibm.com>
Date: Fri, 3 Jun 2022 10:59:51 +0200
From: Laurent Dufour <ldufour@...ux.ibm.com>
To: Nathan Lynch <nathanl@...ux.ibm.com>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
mpe@...erman.id.au, benh@...nel.crashing.org, paulus@...ba.org,
haren@...ux.vnet.ibm.com, npiggin@...il.com
Subject: Re: [PATCH 0/2] Disabling NMI watchdog during LPM's memory transfer
On 02/06/2022, 19:58:31, Nathan Lynch wrote:
> Laurent Dufour <ldufour@...ux.ibm.com> writes:
>> When a partition is migrated, once it arrives at the destination node,
>> the partition is active, but much of its memory still has to be transferred
>> from the start node.
>>
>> It depends on the activity in the partition, but the more CPUs the partition
>> has, the more memory is likely to need transferring. This causes latency
>> when accessing pages that still need to be transferred, and, for large
>> partitions, it often triggers the NMI watchdog.
>
> It also triggers warnings from other watchdogs and subsystems that
> have soft latency requirements - softlockup, RCU, workqueue. The issue
> is more general than the NMI watchdog.
I agree, but, as the subject says, this series focuses on the NMI watchdog,
which may have some dangerous side effects.
>> The NMI watchdog dumps the stack of the CPU where it appears to be
>> stuck. In this case, that does not bring much information, since the hang
>> can happen during any memory access in the kernel.
>
> When the site of a watchdog backtrace shows a thread stuck on a routine
> memory access as opposed to something like a lock acquisition, that is
> actually useful information that shouldn't be discarded. It tells us the
> platform is failing to adequately virtualize partition memory. This
> isn't a benign situation and it's likely to unacceptably affect real
> workloads. The kernel is ideally situated to detect and warn about this.
>
I agree, but the information provided is misleading most of the time,
pointing to whichever part of the kernel happened to take the last of the
series of page faults induced by the memory transfer. There is no real
added value, since it is well known that the memory transfer introduces
latency that the kernel then detects. Furthermore, soft lockups are still
triggered and report that latency as well, without any dangerous side effect.
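
For reference, the soft and hard lockup detectors are controlled by
separate sysctl knobs, so the soft lockup reports can be kept while the
NMI (hard lockup) watchdog is off. A minimal illustration, assuming the
standard lockup-detector sysctls (the small helper below is only for
demonstration, not part of this series):

#include <stdio.h>

/* Print the current value of a lockup-detector knob. */
static void show(const char *path)
{
	char buf[16] = "?\n";
	FILE *f = fopen(path, "r");

	if (f) {
		if (!fgets(buf, sizeof(buf), f))
			snprintf(buf, sizeof(buf), "?\n");
		fclose(f);
	}
	printf("%s = %s", path, buf);
}

int main(void)
{
	show("/proc/sys/kernel/soft_watchdog"); /* soft lockup detector */
	show("/proc/sys/kernel/nmi_watchdog");  /* hard lockup (NMI) detector */
	return 0;
}
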
>> In addition, the NMI interrupt mechanism is not safe and can generate a
>> system dump in the event that the interrupt is taken while
>> MSR[RI]=0.
>
> This sounds like a general problem with that facility that isn't
> specific to partition migration? Maybe it should be disabled altogether
> until that can be fixed?
We have already discussed that with Nick, and it sounds like it is not so
easy to fix. Furthermore, the NMI watchdog is considered a last resort for
analyzing a potentially dying system, so taking the risk of generating a
crash because of the NMI interrupt looks acceptable there. Disabling it
entirely because of that is not the right option.
In the LPM case, the system is only suffering from the LPM latency; it is
not really dying or in a really bad shape, so that risk is too expensive.
Fixing the latency at the source is definitely the best option, and the
PHYP team is already investigating that. But, in the meantime, there is a
way to prevent the system from dying because of that side effect:
disabling the NMI watchdog during the memory transfer.
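
As a rough sketch of the idea (only an illustration, not what this series
does: the series disables the watchdog from within the kernel, and only
while the memory transfer is in progress), the effect can be approximated
from userspace by toggling the standard kernel.nmi_watchdog sysctl around
the migration window:

#include <stdio.h>

/* Illustrative helper: write a value to a sysctl file, needs root. */
static int write_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fputs(val, f) == EOF) {
		if (f)
			fclose(f);
		return -1;
	}
	return fclose(f);
}

int main(void)
{
	const char *knob = "/proc/sys/kernel/nmi_watchdog";

	/* Disable only the hard (NMI) lockup detector. */
	if (write_sysctl(knob, "0"))
		perror(knob);

	/* ... initiate the migration and wait for the memory transfer
	 * to settle ... */

	/* Re-enable the hard lockup detector afterwards. */
	if (write_sysctl(knob, "1"))
		perror(knob);

	return 0;
}
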
>
>> Given how often hard lockups are detected when transferring large
>> partitions, it seems best to disable the NMI watchdog until the memory
>> transfer from the start node is complete.
>
> At this time, I'm far from convinced. Disabling the watchdog is going to
> make the underlying problems in the platform and/or network harder to
> understand.
I was also reluctant, and would like the NMI watchdog to remain active
during LPM. But there is currently no other way to work around the LPM
latency and its potential risk of crashing the system.
I've spent a lot of time analyzing many crashes happening during LPM, and
all of them now point to the NMI watchdog issue. Furthermore, on a
system with thousands of CPUs, I saw a crash because a CPU was not
able to respond in time (1s) to the NMI interrupt, which triggered the panic.
In addition, we now know that an RTAS call, made right after the system is
running again on the arrival side, takes ages and triggers the NMI
watchdog most of the time.
There are ongoing investigations to clarify where and how this latency is
happening. I'm not excluding other issues in the Linux kernel, but right
now, this looks to be the best option to prevent a system crash during LPM.
I'm hoping that the PHYP team will be able to improve that latency. At that
point, this commit can be reverted, but until then, I don't see how we can
do without this workaround.
Laurent.