Message-ID: <d78f65a7-67bb-b5f3-007b-fca5a9f98a69@linux.ibm.com>
Date: Wed, 13 Jul 2022 12:56:21 +0200
From: Laurent Dufour <ldufour@...ux.ibm.com>
To: Randy Dunlap <rdunlap@...radead.org>, mpe@...erman.id.au,
npiggin@...il.com, christophe.leroy@...roup.eu,
wim@...ux-watchdog.org, linux@...ck-us.net, nathanl@...ux.ibm.com
Cc: haren@...ux.vnet.ibm.com, hch@...radead.org,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-watchdog@...r.kernel.org
Subject: Re: [PATCH v4 4/4] pseries/mobility: set NMI watchdog factor during
LPM
On 12/07/2022 at 18:25, Randy Dunlap wrote:
> Hi--
>
> On 7/12/22 07:32, Laurent Dufour wrote:
>> During a LPM, while the memory transfer is in progress on the arrival side,
>> some latencies is generated when accessing not yet transferred pages on the
>
> are
>
>> arrival side. Thus, the NMI watchdog may be triggered too frequently, which
>> increases the risk to hit a NMI interrupt in a bad place in the kernel,
>
> an NMI
>
>> leading to a kernel panic.
>>
>> Disabling the Hard Lockup Watchdog until the memory transfer could be a too
>> strong work around, some users would want this timeout to be eventually
>> triggered if the system is hanging even during LPM.
>>
>> Introduce a new sysctl variable nmi_watchdog_factor. It allows to apply
>> a factor to the NMI watchdog timeout during a LPM. Just before the CPU are
>
> an LPM. the CPU is
>
>> stopped for the switchover sequence, the NMI watchdog timer is set to
>> watchdog_tresh + factor%
>
> watchdog_thresh
>
>>
>> A value of 0 has no effect. The default value is 200, meaning that the NMI
>> watchdog is set to 30s during LPM (based on a 10s watchdog_tresh value).
>
> watchdog_thresh
>
>> Once the memory transfer is achieved, the factor is reset to 0.
>>
>> Setting this value to a high number is like disabling the NMI watchdog
>> during a LPM.
>
> an LPM.
>
>>
>> Reviewed-by: Nicholas Piggin <npiggin@...il.com>
>> Signed-off-by: Laurent Dufour <ldufour@...ux.ibm.com>
>> ---
>> Documentation/admin-guide/sysctl/kernel.rst | 12 ++++++
>> arch/powerpc/platforms/pseries/mobility.c | 43 +++++++++++++++++++++
>> 2 files changed, 55 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
>> index ddccd1077462..0bb0b7f27e96 100644
>> --- a/Documentation/admin-guide/sysctl/kernel.rst
>> +++ b/Documentation/admin-guide/sysctl/kernel.rst
>> @@ -592,6 +592,18 @@ to the guest kernel command line (see
>> Documentation/admin-guide/kernel-parameters.rst).
>>
>
> This entire block should be in kernel-parameters.txt, not .rst,
> and it should be formatted like everything else in the .txt file.
Thanks for reviewing this patch.
I'll apply your requests in the next version.
However, regarding the change in kernel-parameters.txt, I'm confused. The
newly introduced parameter is only exposed through sysctl, not as a kernel
boot option. In that case, should it be mentioned in kernel-parameters.txt?
Documentation/process/4.Coding.rst says:
The file :ref:`Documentation/admin-guide/kernel-parameters.rst
<kernelparameters>` describes all of the kernel's boot-time parameters.
Any patch which adds new parameters should add the appropriate entries to
this file.
And Documentation/process/submit-checklist.rst says:
16) All new kernel boot parameters are documented in
``Documentation/admin-guide/kernel-parameters.rst``.
What are the rules about editing .txt or .rst files?
>>
>> +nmi_watchdog_factor (PPC only)
>> +==================================
>> +
>> +Factor apply to to the NMI watchdog timeout (only when ``nmi_watchdog`` is
>
> Factor to apply to the NMI
>
>> +set to 1). This factor represents the percentage added to
>> +``watchdog_thresh`` when calculating the NMI watchdog timeout during a
>
> during an
>
>> +LPM. The soft lockup timeout is not impacted.
>> +
>> +A value of 0 means no change. The default value is 200 meaning the NMI
>> +watchdog is set to 30s (based on ``watchdog_thresh`` equal to 10).
>> +
>> +
>> numa_balancing
>> ==============
>>
>
>
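
For readers following the thread, here is a minimal standalone sketch of
the timeout arithmetic described in the quoted commit message and
documentation above. It is illustrative only, not the patch's actual code:
the helper name and the user-space framing are assumptions made for the
example.

/*
 * Illustrative only: the NMI watchdog timeout during LPM becomes
 * watchdog_thresh plus factor% of watchdog_thresh, as described in the
 * commit message. The real patch applies this inside the powerpc
 * watchdog code; this standalone helper just shows the math.
 */
#include <stdio.h>

static unsigned int effective_nmi_timeout(unsigned int watchdog_thresh,
					  unsigned int factor_pct)
{
	/* timeout = watchdog_thresh + (watchdog_thresh * factor / 100) */
	return watchdog_thresh + watchdog_thresh * factor_pct / 100;
}

int main(void)
{
	/* Default factor of 200 on a 10s threshold -> 30s. */
	printf("%u\n", effective_nmi_timeout(10, 200));
	/* A factor of 0 leaves the timeout unchanged -> 10s. */
	printf("%u\n", effective_nmi_timeout(10, 0));
	return 0;
}

At run time the value would be tuned through sysctl only (presumably
something like "sysctl kernel.nmi_watchdog_factor=200", assuming the entry
is registered under kernel.* as its placement in sysctl/kernel.rst
suggests), which is why it has no boot-time entry to document.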