Message-ID: <96037040-3b00-4d9a-9ff0-568b7b7b4f30@linux.intel.com>
Date: Tue, 9 Jul 2024 11:04:07 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Jacob Pan <jacob.jun.pan@...ux.intel.com>, X86 Kernel <x86@...nel.org>,
Sean Christopherson <seanjc@...gle.com>, LKML
<linux-kernel@...r.kernel.org>, Thomas Gleixner <tglx@...utronix.de>,
Dave Hansen <dave.hansen@...el.com>, "H. Peter Anvin" <hpa@...or.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Xin Li <xin3.li@...el.com>, linux-perf-users@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Tony Luck <tony.luck@...el.com>,
Andy Lutomirski <luto@...nel.org>, acme@...nel.org,
Andi Kleen <andi.kleen@...el.com>, Nikolay Borisov <nik.borisov@...e.com>,
"Mehta, Sohil" <sohil.mehta@...el.com>, Zeng Guang <guang.zeng@...el.com>
Subject: Re: [PATCH v4 08/11] perf/x86: Enable NMI source reporting for
perfmon
On 2024-07-09 10:39 a.m., Jacob Pan wrote:
> Program the designated NMI source vector into the performance monitoring
> interrupt (PMI) entry of the local vector table. The PMI handler will then
> be invoked directly when its NMI is generated, which avoids the latency of
> blindly calling every registered NMI handler.
>
> Co-developed-by: Zeng Guang <guang.zeng@...el.com>
> Signed-off-by: Zeng Guang <guang.zeng@...el.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
>
Reviewed-by: Kan Liang <kan.liang@...ux.intel.com>
Thanks,
Kan
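
P.S. For anyone skimming the series: the payoff of programming a source
vector into LVTPC is that the NMI entry code can dispatch straight to the
PMI handler instead of blindly polling every registered handler. A
conceptual sketch of that dispatch is below; sketch_nmi_dispatch and its
src argument are invented for illustration (both callees are static in
their own files), and NMI_SOURCE_VEC_PMI comes from an earlier patch in
this series:

	/* Conceptual sketch only -- not the actual series code. */
	static int sketch_nmi_dispatch(struct pt_regs *regs, unsigned long src)
	{
		/*
		 * The hardware reported the PMI source bit, so invoke
		 * only the perf handler rather than iterating the whole
		 * NMI_LOCAL handler list.
		 */
		if (src & BIT(NMI_SOURCE_VEC_PMI))
			return perf_event_nmi_handler(NMI_LOCAL, regs);

		/* No source information: fall back to polling as before. */
		return nmi_handle(NMI_LOCAL, regs);
	}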
> ---
> v4: Use a macro for programming LVTPC unconditionally (Kan)
> v3: Program the NMI source vector in LVTPC unconditionally (HPA)
> v2: Fix a compile error: apic_perfmon_ctr is undefined in the i386 config
> ---
> arch/x86/events/core.c | 4 ++--
> arch/x86/events/intel/core.c | 6 +++---
> arch/x86/include/asm/apic.h | 2 ++
> 3 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 1ef2201e48ac..e69c52f9d662 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -1680,7 +1680,7 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
> * This generic handler doesn't seem to have any issues where the
> * unmasking occurs so it was left at the top.
> */
> - apic_write(APIC_LVTPC, APIC_DM_NMI);
> + apic_write(APIC_LVTPC, APIC_PERF_NMI);
>
> for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> if (!test_bit(idx, cpuc->active_mask))
> @@ -1723,7 +1723,7 @@ void perf_events_lapic_init(void)
> /*
> * Always use NMI for PMU
> */
> - apic_write(APIC_LVTPC, APIC_DM_NMI);
> + apic_write(APIC_LVTPC, APIC_PERF_NMI);
> }
>
> static int
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 38c1b1f1deaa..e7e114616e24 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3093,7 +3093,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
> * NMI handler.
> */
> if (!late_ack && !mid_ack)
> - apic_write(APIC_LVTPC, APIC_DM_NMI);
> + apic_write(APIC_LVTPC, APIC_PERF_NMI);
> intel_bts_disable_local();
> cpuc->enabled = 0;
> __intel_pmu_disable_all(true);
> @@ -3130,7 +3130,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
>
> done:
> if (mid_ack)
> - apic_write(APIC_LVTPC, APIC_DM_NMI);
> + apic_write(APIC_LVTPC, APIC_PERF_NMI);
> /* Only restore PMU state when it's active. See x86_pmu_disable(). */
> cpuc->enabled = pmu_enabled;
> if (pmu_enabled)
> @@ -3143,7 +3143,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
> * Haswell CPUs.
> */
> if (late_ack)
> - apic_write(APIC_LVTPC, APIC_DM_NMI);
> + apic_write(APIC_LVTPC, APIC_PERF_NMI);
> return handled;
> }
>
> diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
> index 9327eb00e96d..d284eff7849c 100644
> --- a/arch/x86/include/asm/apic.h
> +++ b/arch/x86/include/asm/apic.h
> @@ -30,6 +30,8 @@
> #define APIC_EXTNMI_ALL 1
> #define APIC_EXTNMI_NONE 2
>
> +#define APIC_PERF_NMI (APIC_DM_NMI | NMI_SOURCE_VEC_PMI)
> +
> /*
> * Define the default level of output to be very little
> * This can be turned up by using apic=verbose for more
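
One note on what the new macro composes, for anyone misreading it as a
plain delivery-mode change: the LVT vector field (bits 7:0) now carries
the NMI source vector while the delivery mode field (bits 10:8) stays
NMI, so programming LVTPC remains a single apic_write(). Rough expansion
below; the NMI_SOURCE_VEC_PMI value shown is an assumed example (the
real definition is in an earlier patch of this series), while
APIC_DM_NMI is the existing 0x00400 encoding:

	/* Existing encoding: delivery mode NMI = 100b in bits 10:8. */
	#define APIC_DM_NMI		0x00400
	/* Assumed example value; actually defined earlier in the series. */
	#define NMI_SOURCE_VEC_PMI	2
	/* LVTPC entry: NMI delivery mode + PMI source vector in bits 7:0. */
	#define APIC_PERF_NMI		(APIC_DM_NMI | NMI_SOURCE_VEC_PMI)

	/* perf_events_lapic_init() then still does a single write: */
	/* apic_write(APIC_LVTPC, APIC_PERF_NMI); */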