Message-ID: <afdc5b07-9795-4049-8941-c2e3d2bbaa87@linux.intel.com>
Date: Thu, 27 Feb 2025 09:06:45 -0500
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Dapeng Mi <dapeng1.mi@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>, Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>, Eranian Stephane <eranian@...gle.com>
Cc: linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
Dapeng Mi <dapeng1.mi@...el.com>
Subject: Re: [Patch v2 13/24] perf/x86/intel: Update dyn_constraint based on
 PEBS event precise level
On 2025-02-18 10:28 a.m., Dapeng Mi wrote:
> arch-PEBS provides CPUIDs to enumerate which counters support PEBS
> sampling and precise distribution PEBS sampling. Thus PEBS constraints
> should be dynamically configured based on these counter and precise
> distribution bitmaps instead of being defined statically.
>
> Update the event dyn_constraint based on the PEBS event's precise level.
>
> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
> ---
>  arch/x86/events/intel/core.c | 6 ++++++
>  arch/x86/events/intel/ds.c   | 1 +
>  2 files changed, 7 insertions(+)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 472366c3db22..c777e0531d40 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4033,6 +4033,8 @@ static int intel_pmu_hw_config(struct perf_event *event)
>  		return ret;
>  
>  	if (event->attr.precise_ip) {
> +		struct arch_pebs_cap pebs_cap = hybrid(event->pmu, arch_pebs_cap);
> +
>  		if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
>  			return -EINVAL;
>  
> @@ -4046,6 +4048,10 @@ static int intel_pmu_hw_config(struct perf_event *event)
>  		}
>  		if (x86_pmu.pebs_aliases)
>  			x86_pmu.pebs_aliases(event);
> +
> +		if (x86_pmu.arch_pebs)
> +			event->hw.dyn_constraint = event->attr.precise_ip >= 3 ?
> +						   pebs_cap.pdists : pebs_cap.counters;
>  	}
The dyn_constraint is only required when the counter mask differs from the
regular one, and I think pebs_cap.counters is very likely the same as the
regular counter mask. Maybe something like below (not tested).
if (x86_pmu.arch_pebs) {
	u64 cntr_mask = event->attr.precise_ip >= 3 ?
			pebs_cap.pdists : pebs_cap.counters;
	if (cntr_mask != hybrid(event->pmu, intel_ctrl))
		event->hw.dyn_constraint = cntr_mask;
}
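
Just to illustrate the effect of that check with some made-up bitmap values
(a standalone userspace sketch, not kernel code; the masks below are
hypothetical, not real CPUID output):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical masks, not real CPUID/MSR values. */
	uint64_t intel_ctrl    = 0xff;	/* regular counter mask */
	uint64_t pebs_counters = 0xff;	/* arch-PEBS capable counters */
	uint64_t pebs_pdists   = 0x0f;	/* precise-distribution capable counters */

	/* precise_ip < 3: mask matches intel_ctrl, dyn_constraint can be skipped. */
	uint64_t cntr_mask = pebs_counters;
	printf("precise_ip < 3:  %s\n",
	       cntr_mask != intel_ctrl ? "set dyn_constraint" : "skip dyn_constraint");

	/* precise_ip >= 3: pdists is a strict subset, dyn_constraint is needed. */
	cntr_mask = pebs_pdists;
	printf("precise_ip >= 3: %s (0x%llx)\n",
	       cntr_mask != intel_ctrl ? "set dyn_constraint" : "skip dyn_constraint",
	       (unsigned long long)cntr_mask);

	return 0;
}

With values like these, the precise_ip < 3 case never takes the dynamic
constraint path, while the >= 3 case restricts scheduling to the
pdist-capable counters only.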
Thanks,
Kan
>
>  	if (needs_branch_stack(event)) {
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index 519767fc9180..615aefb4e52e 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -2948,6 +2948,7 @@ static void __init intel_arch_pebs_init(void)
>  	x86_pmu.pebs_buffer_size = PEBS_BUFFER_SIZE;
>  	x86_pmu.drain_pebs = intel_pmu_drain_arch_pebs;
>  	x86_pmu.pebs_capable = ~0ULL;
> +	x86_pmu.flags |= PMU_FL_PEBS_ALL;
>  
>  	x86_pmu.pebs_enable = __intel_pmu_pebs_enable;
>  	x86_pmu.pebs_disable = __intel_pmu_pebs_disable;