Message-ID: <20251106145217.GA4067720@noisy.programming.kicks-ass.net>
Date: Thu, 6 Nov 2025 15:52:17 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Dapeng Mi <dapeng1.mi@...ux.intel.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>, Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Eranian Stephane <eranian@...gle.com>, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, Dapeng Mi <dapeng1.mi@...el.com>,
Zide Chen <zide.chen@...el.com>,
Falcon Thomas <thomas.falcon@...el.com>,
Xudong Hao <xudong.hao@...el.com>
Subject: Re: [Patch v9 10/12] perf/x86/intel: Update dyn_constraint based on
 PEBS event precise level
On Wed, Oct 29, 2025 at 06:21:34PM +0800, Dapeng Mi wrote:
> arch-PEBS provides CPUID bitmaps to enumerate which counters support
> PEBS sampling and precise-distribution PEBS sampling. Thus the PEBS
> constraints should be configured dynamically, based on these counter and
> precise-distribution bitmaps, instead of being defined statically.
>
> Update the event's dyn_constraint based on the PEBS event's precise level.
What happened to this:
https://lore.kernel.org/all/e0b25b3e-aec0-4c43-9ab2-907186b56c71@linux.intel.com/
> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
> ---
>  arch/x86/events/intel/core.c | 11 +++++++++++
>  arch/x86/events/intel/ds.c   |  1 +
>  2 files changed, 12 insertions(+)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 6e04d73dfae5..40ccfd80d554 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4252,6 +4252,8 @@ static int intel_pmu_hw_config(struct perf_event *event)
>  	}
>  
>  	if (event->attr.precise_ip) {
> +		struct arch_pebs_cap pebs_cap = hybrid(event->pmu, arch_pebs_cap);
> +
>  		if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
>  			return -EINVAL;
>  
> @@ -4265,6 +4267,15 @@ static int intel_pmu_hw_config(struct perf_event *event)
>  		}
>  		if (x86_pmu.pebs_aliases)
>  			x86_pmu.pebs_aliases(event);
> +
> +		if (x86_pmu.arch_pebs) {
> +			u64 cntr_mask = hybrid(event->pmu, intel_ctrl) &
> +					~GLOBAL_CTRL_EN_PERF_METRICS;
> +			u64 pebs_mask = event->attr.precise_ip >= 3 ?
> +					pebs_cap.pdists : pebs_cap.counters;
> +			if (cntr_mask != pebs_mask)
> +				event->hw.dyn_constraint &= pebs_mask;
> +		}
>  	}
>  
>  	if (needs_branch_stack(event)) {
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index 5c26a5235f94..1179980f795b 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -3005,6 +3005,7 @@ static void __init intel_arch_pebs_init(void)
>  	x86_pmu.pebs_buffer_size = PEBS_BUFFER_SIZE;
>  	x86_pmu.drain_pebs = intel_pmu_drain_arch_pebs;
>  	x86_pmu.pebs_capable = ~0ULL;
> +	x86_pmu.flags |= PMU_FL_PEBS_ALL;
>  
>  	x86_pmu.pebs_enable = __intel_pmu_pebs_enable;
>  	x86_pmu.pebs_disable = __intel_pmu_pebs_disable;
> --
> 2.34.1
>
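For reference, a minimal standalone sketch of the constraint narrowing done in
the intel_pmu_hw_config() hunk above. The simplified struct, the mask form of
GLOBAL_CTRL_EN_PERF_METRICS (the PERF_METRICS enable is bit 48 of
IA32_PERF_GLOBAL_CTRL), and the example capability values are illustrative
stand-ins, not the kernel's actual definitions:

#include <stdint.h>
#include <stdio.h>

/* Assumed mask form for illustration; bit 48 = EN_PERF_METRICS. */
#define GLOBAL_CTRL_EN_PERF_METRICS	(1ULL << 48)

struct arch_pebs_cap {
	uint64_t counters;	/* counters that support PEBS sampling */
	uint64_t pdists;	/* counters that support precise-distribution PEBS */
};

static uint64_t narrow_dyn_constraint(uint64_t dyn_constraint, uint64_t intel_ctrl,
				      struct arch_pebs_cap cap, unsigned int precise_ip)
{
	uint64_t cntr_mask = intel_ctrl & ~GLOBAL_CTRL_EN_PERF_METRICS;
	uint64_t pebs_mask = precise_ip >= 3 ? cap.pdists : cap.counters;

	/* Only narrow when PEBS capability covers fewer counters than the PMU has. */
	if (cntr_mask != pebs_mask)
		dyn_constraint &= pebs_mask;

	return dyn_constraint;
}

int main(void)
{
	/* Hypothetical part: 8 GP counters, PEBS on all of them, pdist only on counter 0. */
	struct arch_pebs_cap cap = { .counters = 0xff, .pdists = 0x01 };
	uint64_t intel_ctrl = 0xff | GLOBAL_CTRL_EN_PERF_METRICS;

	printf("precise_ip=2 -> %#llx\n",
	       (unsigned long long)narrow_dyn_constraint(~0ULL, intel_ctrl, cap, 2));
	printf("precise_ip=3 -> %#llx\n",
	       (unsigned long long)narrow_dyn_constraint(~0ULL, intel_ctrl, cap, 3));
	return 0;
}

With these example values, precise_ip = 2 leaves the constraint untouched
(PEBS covers every counter), while precise_ip >= 3 narrows it to the single
pdist-capable counter.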