Date:   Mon, 22 May 2017 11:20:27 -0700
From:   Stephane Eranian <eranian@...gle.com>
To:     "Liang, Kan" <kan.liang@...el.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        "mingo@...hat.com" <mingo@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Arnaldo Carvalho de Melo <acme@...hat.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Vince Weaver <vincent.weaver@...ne.edu>,
        "ak@...ux.intel.com" <ak@...ux.intel.com>
Subject: Re: [PATCH 2/2] perf/x86/intel, watchdog: Switch NMI watchdog to ref
 cycles on x86

Andi,

On Fri, May 19, 2017 at 10:06 AM,  <kan.liang@...el.com> wrote:
> From: Kan Liang <Kan.liang@...el.com>
>
> The NMI watchdog uses either the fixed cycles or a generic cycles
> counter. This causes a lot of conflicts with users of the PMU who want
> to run a full group including the cycles fixed counter, for example the
> --topdown support recently added to perf stat. The code needs to fall
> back to not use groups, which can cause measurement inaccuracy due to
> multiplexing errors.
>
> This patch switches the NMI watchdog to use reference cycles on Intel
> systems. This is actually more accurate than cycles, because cycles can
> tick faster than the measured CPU frequency due to Turbo mode.
>
You have not explained why you need that accuracy.
This is about detecting hard lockups, so a few seconds of slack does not
matter. Instead of introducing all this complexity, why not simply extend
the watchdog period to be more tolerant of Turbo scaling, avoiding false
positives, and continue to use core cycles, an event that is universally
available?


> The ref cycles always tick at their frequency, or slower when the system
> is idling. That means the NMI watchdog can never expire too early,
> unlike with cycles.
>
Just make the period longer, e.g. 30% longer. Take the maximum Turbo
ratio you can get and use that. It is okay if detection takes longer on
machines with smaller max Turbo ratios.

What is the problem with this approach instead?

> The reference cycles tick roughly at the frequency of the TSC, so the
> same period computation can be used.
>
> Signed-off-by: Andi Kleen <ak@...ux.intel.com>
> ---
>
> This patch was once merged, but later reverted, because ref-cycles
> could no longer be used at all while the watchdog was enabled.
> The commit is 44530d588e142a96cf0cd345a7cb8911c4f88720
>
> Patch 1/2 has extended ref-cycles to the GP counters, so that concern
> should be gone.
>
> Rebased the patch and repost.
>
>
>  arch/x86/kernel/apic/hw_nmi.c | 8 ++++++++
>  include/linux/nmi.h           | 1 +
>  kernel/watchdog_hld.c         | 7 +++++++
>  3 files changed, 16 insertions(+)
>
> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> index c73c9fb..acd21dc 100644
> --- a/arch/x86/kernel/apic/hw_nmi.c
> +++ b/arch/x86/kernel/apic/hw_nmi.c
> @@ -18,8 +18,16 @@
>  #include <linux/nmi.h>
>  #include <linux/init.h>
>  #include <linux/delay.h>
> +#include <linux/perf_event.h>
>
>  #ifdef CONFIG_HARDLOCKUP_DETECTOR
> +int hw_nmi_get_event(void)
> +{
> +       if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
> +               return PERF_COUNT_HW_REF_CPU_CYCLES;
> +       return PERF_COUNT_HW_CPU_CYCLES;
> +}
> +
>  u64 hw_nmi_get_sample_period(int watchdog_thresh)
>  {
>         return (u64)(cpu_khz) * 1000 * watchdog_thresh;
> diff --git a/include/linux/nmi.h b/include/linux/nmi.h
> index aa3cd08..b2fa444 100644
> --- a/include/linux/nmi.h
> +++ b/include/linux/nmi.h
> @@ -141,6 +141,7 @@ static inline bool trigger_single_cpu_backtrace(int cpu)
>
>  #ifdef CONFIG_LOCKUP_DETECTOR
>  u64 hw_nmi_get_sample_period(int watchdog_thresh);
> +int hw_nmi_get_event(void);
>  extern int nmi_watchdog_enabled;
>  extern int soft_watchdog_enabled;
>  extern int watchdog_user_enabled;
> diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
> index 54a427d..f899766 100644
> --- a/kernel/watchdog_hld.c
> +++ b/kernel/watchdog_hld.c
> @@ -70,6 +70,12 @@ void touch_nmi_watchdog(void)
>  }
>  EXPORT_SYMBOL(touch_nmi_watchdog);
>
> +/* Can be overridden by architecture */
> +__weak int hw_nmi_get_event(void)
> +{
> +       return PERF_COUNT_HW_CPU_CYCLES;
> +}
> +
>  static struct perf_event_attr wd_hw_attr = {
>         .type           = PERF_TYPE_HARDWARE,
>         .config         = PERF_COUNT_HW_CPU_CYCLES,
> @@ -165,6 +171,7 @@ int watchdog_nmi_enable(unsigned int cpu)
>
>         wd_attr = &wd_hw_attr;
>         wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
> +       wd_attr->config = hw_nmi_get_event();
>
>         /* Try to register using hardware perf events */
>         event = perf_event_create_kernel_counter(wd_attr, cpu, NULL, watchdog_overflow_callback, NULL);
> --
> 2.7.4
>
