Message-ID: <2b020ea0-7b1f-034c-10dd-f38721776163@linux.intel.com>
Date: Tue, 16 Aug 2022 08:04:45 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Like Xu <like.xu.linux@...il.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>, x86@...nel.org,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/x86/core: Set pebs_capable and PMU_FL_PEBS_ALL for
the Baseline
On 2022-08-16 7:40 a.m., Like Xu wrote:
> From: Peter Zijlstra <peterz@...radead.org>
>
> The SDM explicitly states that PEBS Baseline implies Extended PEBS.
> For CPU model forward compatibility (e.g. on ICX, SPR, ADL), it's
> safe to stop setting pebs_capable and PMU_FL_PEBS_ALL in the
> per-model FMS table, since intel_ds_init() already sets them.
>
> The Goldmont Plus is the only platform which supports extended PEBS
> but doesn't have Baseline. Keep the status quo.
>
> Cc: Kan Liang <kan.liang@...ux.intel.com>
> Reported-by: Like Xu <likexu@...cent.com>
> Signed-off-by: Peter Zijlstra <peterz@...radead.org>
Reviewed-by: Kan Liang <kan.liang@...ux.intel.com>
Thanks,
Kan
> ---
> arch/x86/events/intel/core.c | 4 ----
> arch/x86/events/intel/ds.c | 1 +
> 2 files changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 2db93498ff71..cb98a05ee743 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -6291,10 +6291,8 @@ __init int intel_pmu_init(void)
> x86_pmu.pebs_aliases = NULL;
> x86_pmu.pebs_prec_dist = true;
> x86_pmu.pebs_block = true;
> - x86_pmu.pebs_capable = ~0ULL;
> x86_pmu.flags |= PMU_FL_HAS_RSP_1;
> x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
> - x86_pmu.flags |= PMU_FL_PEBS_ALL;
> x86_pmu.flags |= PMU_FL_INSTR_LATENCY;
> x86_pmu.flags |= PMU_FL_MEM_LOADS_AUX;
>
> @@ -6337,10 +6335,8 @@ __init int intel_pmu_init(void)
> x86_pmu.pebs_aliases = NULL;
> x86_pmu.pebs_prec_dist = true;
> x86_pmu.pebs_block = true;
> - x86_pmu.pebs_capable = ~0ULL;
> x86_pmu.flags |= PMU_FL_HAS_RSP_1;
> x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
> - x86_pmu.flags |= PMU_FL_PEBS_ALL;
> x86_pmu.flags |= PMU_FL_INSTR_LATENCY;
> x86_pmu.flags |= PMU_FL_MEM_LOADS_AUX;
> x86_pmu.lbr_pt_coexist = true;
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index ba60427caa6d..ac6dd4c96dbc 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -2262,6 +2262,7 @@ void __init intel_ds_init(void)
> PERF_SAMPLE_BRANCH_STACK |
> PERF_SAMPLE_TIME;
> x86_pmu.flags |= PMU_FL_PEBS_ALL;
> + x86_pmu.pebs_capable = ~0ULL;
> pebs_qual = "-baseline";
> x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
> } else {