Message-ID: <ZFErmshcrcikrSU1@alley>
Date:   Tue, 2 May 2023 17:26:18 +0200
From:   Petr Mladek <pmladek@...e.com>
To:     Douglas Anderson <dianders@...omium.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mark Rutland <mark.rutland@....com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Sumit Garg <sumit.garg@...aro.org>,
        Daniel Thompson <daniel.thompson@...aro.org>,
        Ian Rogers <irogers@...gle.com>, ravi.v.shankar@...el.com,
        Marc Zyngier <maz@...nel.org>,
        linux-perf-users@...r.kernel.org,
        Stephane Eranian <eranian@...gle.com>,
        kgdb-bugreport@...ts.sourceforge.net, ito-yuichi@...itsu.com,
        linux-arm-kernel@...ts.infradead.org,
        Stephen Boyd <swboyd@...omium.org>,
        Masayoshi Mizuma <msys.mizuma@...il.com>,
        ricardo.neri@...el.com, Lecopzer Chen <lecopzer.chen@...iatek.com>,
        Chen-Yu Tsai <wens@...e.org>, Andi Kleen <ak@...ux.intel.com>,
        Colin Cross <ccross@...roid.com>,
        Matthias Kaehlcke <mka@...omium.org>,
        Guenter Roeck <groeck@...omium.org>,
        Tzung-Bi Shih <tzungbi@...omium.org>,
        Alexander Potapenko <glider@...gle.com>,
        AngeloGioacchino Del Regno 
        <angelogioacchino.delregno@...labora.com>,
        Geert Uytterhoeven <geert+renesas@...der.be>,
        Juergen Gross <jgross@...e.com>,
        Kees Cook <keescook@...omium.org>,
        Laurent Dufour <ldufour@...ux.ibm.com>,
        Liam Howlett <liam.howlett@...cle.com>,
        Masahiro Yamada <masahiroy@...nel.org>,
        Matthias Brugger <matthias.bgg@...il.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Miguel Ojeda <ojeda@...nel.org>,
        Nathan Chancellor <nathan@...nel.org>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Sami Tolvanen <samitolvanen@...gle.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Zhaoyang Huang <zhaoyang.huang@...soc.com>,
        Zhen Lei <thunder.leizhen@...wei.com>,
        linux-kernel@...r.kernel.org, linux-mediatek@...ts.infradead.org
Subject: shared code: was: Re: [PATCH v3] hardlockup: detect hard lockups
 using secondary (buddy) CPUs

On Mon 2023-05-01 08:24:46, Douglas Anderson wrote:
> From: Colin Cross <ccross@...roid.com>
> 
> Implement a hardlockup detector that doesn't need any extra
> arch-specific support code to detect lockups. Instead of using
> something arch-specific we will use the buddy system, where each CPU
> watches out for another one. Specifically, each CPU will use its
> softlockup hrtimer to check that the next CPU is processing hrtimer
> interrupts by verifying that a counter is increasing.
> 
> --- a/include/linux/nmi.h
> +++ b/include/linux/nmi.h
> @@ -134,6 +144,7 @@ void lockup_detector_reconfigure(void);
>  static inline void touch_nmi_watchdog(void)
>  {
>  	arch_touch_nmi_watchdog();
> +	buddy_cpu_touch_watchdog();

	touch_buddy_watchdog();    ??? to follow the naming scheme?
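
	(For reference, the existing "touch" helpers in
	include/linux/nmi.h put the verb first:

	void touch_nmi_watchdog(void);
	void touch_softlockup_watchdog(void);

	so touch_buddy_watchdog() would match that scheme.)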

>  	touch_softlockup_watchdog();
>  }
>  
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -106,6 +108,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
>  	hardlockup_detector_perf_disable();
>  }
>  
> +#else
> +
> +int __weak watchdog_nmi_enable(unsigned int cpu) { return 0; }
> +void __weak watchdog_nmi_disable(unsigned int cpu) { return; }

Honestly, the mix of softlockup and hardlockup code was hard to
follow even before this patch. And it is going to get worse.

Anyway, the buddy watchdog is not using NMIs at all. It should not
get enabled via a function called *_nmi_enable().

Also, some comments are no longer valid, for example:

static void watchdog_enable(unsigned int cpu)
{
[...]
	/* Enable the perf event */
	if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
		watchdog_nmi_enable(cpu);


I do not know. Maybe fixing the mess is beyond any hope.
But we should not make it worse.

I suggest renaming/shuffling at least the functions touched
by this patchset to make the naming meaningful.

Sigh, it is hard to find reasonable names. The code
already uses:

    + watchdog_*
    + watchdog_nmi_*

    + softlockup_*

    + lockup_detector_*
    + hardlockup_detector_perf_*

and sysctl:

		.procname       = "watchdog",
		.procname	= "watchdog_thresh",
		.procname       = "nmi_watchdog",
		.procname	= "watchdog_cpumask",
		.procname       = "soft_watchdog",
		.procname	= "softlockup_panic",
		.procname	= "softlockup_all_cpu_backtrace",
		.procname	= "hardlockup_panic",
		.procname	= "hardlockup_all_cpu_backtrace",


So, I suggest, to use the names:


    + watchdog_*

	+ for the common infrastructure
	+ keep it in watchdog.c

    + hardlockup_detector_* or
      hardlockup_watchdog_* or
      watchdog_hld_*

	+ for the common hardlockup stuff.
	+ it can stay in watchdog.c to keep the shuffling bearable


    + hardlockup_detector_nmi_* or
      hardlockup_watchdog_nmi_* or
      watchdog_hld_nmi_* or
      watchdog_nmi_*

	+ for the arch-specific hardlockup stuff that is
	  using NMI interrupts.

	+ it might either stay in watchdog_hld.c
	  or be moved to watchdog_nmi.c or
	  watchdog_hld_nmi.c

    + hardlockup_detector_buddy_* or
      hardlockup_watchdog_buddy_* or
      watchdog_hld_buddy_* or
      watchdog_buddy_*

	+ for the arch-independent hardlockup stuff that is
	  using buddy monitoring

	+ it might either be added to watchdog_hld.c
	  or be moved to watchdog_buddy.c or
	  watchdog_hld_buddy.c


Opinion:

     The buddy watchdog might actually be usable also for
     the softlockup detector. So, a watchdog_buddy_* API
     and watchdog_buddy.c might make sense.
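
To make it concrete, a rough sketch of how it could look with
the watchdog_buddy_* naming. This is just an illustration; the
bodies come from this patch, and CONFIG_HARDLOCKUP_DETECTOR_BUDDY
is a made-up config name:

/* include/linux/nmi.h */
#ifdef CONFIG_HARDLOCKUP_DETECTOR_BUDDY
void touch_buddy_watchdog(void);
void watchdog_buddy_check_hardlockup(void);
#else
static inline void touch_buddy_watchdog(void) { }
static inline void watchdog_buddy_check_hardlockup(void) { }
#endif

/* kernel/watchdog_buddy.c */
#include <linux/nmi.h>
#include <linux/percpu-defs.h>

static DEFINE_PER_CPU(bool, watchdog_buddy_touched);

notrace void touch_buddy_watchdog(void)
{
	/* __raw for the same reason as in arch_touch_nmi_watchdog() */
	raw_cpu_write(watchdog_buddy_touched, true);
}
EXPORT_SYMBOL_GPL(touch_buddy_watchdog);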


> +
> +#endif /* CONFIG_HARDLOCKUP_DETECTOR */
> +
>  /* Return 0, if a NMI watchdog is available. Error code otherwise */
>  int __weak __init watchdog_nmi_probe(void)
>  {
> @@ -364,6 +373,9 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
>  	/* kick the hardlockup detector */
>  	watchdog_interrupt_count();
>  
> +	/* test for hardlockups */
> +	watchdog_check_hardlockup();

  rename to watchdog_buddy_check_hardlockup() ???

> +
>  	/* kick the softlockup detector */
>  	if (completion_done(this_cpu_ptr(&softlockup_completion))) {
>  		reinit_completion(this_cpu_ptr(&softlockup_completion));
> --- /dev/null
> +++ b/kernel/watchdog_buddy_cpu.c
> @@ -0,0 +1,141 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/cpu.h>
> +#include <linux/cpumask.h>
> +#include <linux/kernel.h>
> +#include <linux/nmi.h>
> +#include <linux/percpu-defs.h>
> +
> +static DEFINE_PER_CPU(bool, watchdog_touch);
> +static DEFINE_PER_CPU(bool, hard_watchdog_warn);
> +static cpumask_t __read_mostly watchdog_cpus;
> +
> +static unsigned long hardlockup_allcpu_dumped;
> +
> +int __init watchdog_nmi_probe(void)
> +{
> +	return 0;
> +}

This is pretty strange. It shows that it was added in a hacky way.

> +
> +notrace void buddy_cpu_touch_watchdog(void)
> +{
> +	/*
> +	 * Using __raw here because some code paths have
> +	 * preemption enabled.  If preemption is enabled
> +	 * then interrupts should be enabled too, in which
> +	 * case we shouldn't have to worry about the watchdog
> +	 * going off.
> +	 */
> +	raw_cpu_write(watchdog_touch, true);
> +}
> +EXPORT_SYMBOL_GPL(buddy_cpu_touch_watchdog);

This is cut&pasted from arch_touch_nmi_watchdog().

> +
> +static unsigned int watchdog_next_cpu(unsigned int cpu)
> +{
> +	cpumask_t cpus = watchdog_cpus;
> +	unsigned int next_cpu;
> +
> +	next_cpu = cpumask_next(cpu, &cpus);
> +	if (next_cpu >= nr_cpu_ids)
> +		next_cpu = cpumask_first(&cpus);
> +
> +	if (next_cpu == cpu)
> +		return nr_cpu_ids;
> +
> +	return next_cpu;
> +}
> +
[...]
> +static int is_hardlockup_buddy_cpu(unsigned int cpu)
> +{
> +	unsigned long hrint = per_cpu(hrtimer_interrupts, cpu);
> +
> +	if (per_cpu(hrtimer_interrupts_saved, cpu) == hrint)
> +		return 1;
> +
> +	per_cpu(hrtimer_interrupts_saved, cpu) = hrint;
> +	return 0;

This is cut&pasted from is_hardlockup(), with the __this_cpu_* API
replaced by the per_cpu_* API.

> +}
> +
> +void watchdog_check_hardlockup(void)
> +{
> +	unsigned int next_cpu;
> +
> +	/*
> +	 * Test for hardlockups every 3 samples. The sample period is
> +	 *  watchdog_thresh * 2 / 5, so 3 samples gets us back to slightly over
> +	 *  watchdog_thresh (over by 20%).
> +	 */
> +	if (__this_cpu_read(hrtimer_interrupts) % 3 != 0)
> +		return;
> +
> +	/* check for a hardlockup on the next CPU */
> +	next_cpu = watchdog_next_cpu(smp_processor_id());
> +	if (next_cpu >= nr_cpu_ids)
> +		return;
> +
> +	/* Match with smp_wmb() in watchdog_nmi_enable() / watchdog_nmi_disable() */
> +	smp_rmb();
> +
> +	if (per_cpu(watchdog_touch, next_cpu) == true) {
> +		per_cpu(watchdog_touch, next_cpu) = false;
> +		return;
> +	}
> +
> +	if (is_hardlockup_buddy_cpu(next_cpu)) {
> +		/* only warn once */
> +		if (per_cpu(hard_watchdog_warn, next_cpu) == true)
> +			return;
> +
> +		/*
> +		 * Perform all-CPU dump only once to avoid multiple hardlockups
> +		 * generating interleaving traces
> +		 */
> +		if (sysctl_hardlockup_all_cpu_backtrace &&
> +				!test_and_set_bit(0, &hardlockup_allcpu_dumped))
> +			trigger_allbutself_cpu_backtrace();
> +
> +		if (hardlockup_panic)
> +			panic("Watchdog detected hard LOCKUP on cpu %u", next_cpu);
> +		else
> +			WARN(1, "Watchdog detected hard LOCKUP on cpu %u", next_cpu);
> +
> +		per_cpu(hard_watchdog_warn, next_cpu) = true;
> +	} else {
> +		per_cpu(hard_watchdog_warn, next_cpu) = false;

Also, this cut&pastes a lot of code from watchdog_overflow_callback().

I wonder if we could somehow share the code between the two hardlockup
detectors. It would be a win-win. It might help a lot with maintenance.
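
For example (an untested idea): if is_hardlockup() took the CPU as
a parameter, both detectors could use the very same copy instead of
the per_cpu_* clone above:

/* kernel/watchdog.c -- shared by both hardlockup detectors;
 * would need a declaration in a header once the buddy code
 * lives in its own file */
bool is_hardlockup(unsigned int cpu)
{
	unsigned long hrint = per_cpu(hrtimer_interrupts, cpu);

	if (per_cpu(hrtimer_interrupts_saved, cpu) == hrint)
		return true;

	per_cpu(hrtimer_interrupts_saved, cpu) = hrint;
	return false;
}

The perf callback would then call is_hardlockup(smp_processor_id())
and the buddy check is_hardlockup(next_cpu). The warn/panic/backtrace
sequence could be shared the same way.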

Best Regards,
Petr
