Message-ID: <1338dd77-e9c1-4eac-9d0f-195829acdd2a@linux.intel.com>
Date: Thu, 6 Feb 2025 10:47:26 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: "Liang, Kan" <kan.liang@...ux.intel.com>,
 Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 Arnaldo Carvalho de Melo <acme@...nel.org>,
 Namhyung Kim <namhyung@...nel.org>, Ian Rogers <irogers@...gle.com>,
 Adrian Hunter <adrian.hunter@...el.com>,
 Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
 Andi Kleen <ak@...ux.intel.com>, Eranian Stephane <eranian@...gle.com>
Cc: linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
 Dapeng Mi <dapeng1.mi@...el.com>
Subject: Re: [PATCH 11/20] perf/x86/intel: Setup PEBS constraints base on
 counter & pdist map


On 1/28/2025 12:07 AM, Liang, Kan wrote:
>
> On 2025-01-23 9:07 a.m., Dapeng Mi wrote:
>> arch-PEBS provides CPUIDs to enumerate which counters support PEBS
>> sampling and precise distribution PEBS sampling. Thus PEBS constraints
>> can be dynamically configured base on these counter and precise
>> distribution bitmap instead of defining them statically.
>>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
>> ---
>>  arch/x86/events/intel/core.c | 20 ++++++++++++++++++++
>>  arch/x86/events/intel/ds.c   |  1 +
>>  2 files changed, 21 insertions(+)
>>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 7775e1e1c1e9..0f1be36113fa 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3728,6 +3728,7 @@ intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
>>  			    struct perf_event *event)
>>  {
>>  	struct event_constraint *c1, *c2;
>> +	struct pmu *pmu = event->pmu;
>>  
>>  	c1 = cpuc->event_constraint[idx];
>>  
>> @@ -3754,6 +3755,25 @@ intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
>>  		c2->weight = hweight64(c2->idxmsk64);
>>  	}
>>  
>> +	if (x86_pmu.arch_pebs && event->attr.precise_ip) {
>> +		u64 pebs_cntrs_mask;
>> +		u64 cntrs_mask;
>> +
>> +		if (event->attr.precise_ip >= 3)
>> +			pebs_cntrs_mask = hybrid(pmu, arch_pebs_cap).pdists;
>> +		else
>> +			pebs_cntrs_mask = hybrid(pmu, arch_pebs_cap).counters;
>> +
>> +		cntrs_mask = hybrid(pmu, fixed_cntr_mask64) << INTEL_PMC_IDX_FIXED |
>> +			     hybrid(pmu, cntr_mask64);
>> +
>> +		if (pebs_cntrs_mask != cntrs_mask) {
>> +			c2 = dyn_constraint(cpuc, c2, idx);
>> +			c2->idxmsk64 &= pebs_cntrs_mask;
>> +			c2->weight = hweight64(c2->idxmsk64);
>> +		}
>> +	}
> The pebs_cntrs_mask and cntrs_mask wouldn't be changed since the machine
> boot. I don't think it's efficient to calculate them every time.
>
> Maybe adding a local pebs_event_constraints_pdist[] and update both
> pebs_event_constraints[] and pebs_event_constraints_pdist[] with the
> enumerated mask at initialization time.
>
> Update the intel_pebs_constraints() to utilize the corresponding array
> according to the precise_ip.
>
> The above may be avoided.

Even if we have these two arrays, we still need the dynamic constraint, right?
We can't predict what the event is; the event may be mapped to a quite
specific event constraint, and we can't know that in advance.


>
> Thanks,
> Kan
>
>> +
>>  	return c2;
>>  }
>>  
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index 2f2c6b7c801b..a573ce0e576a 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
>> @@ -2941,6 +2941,7 @@ static void __init intel_arch_pebs_init(void)
>>  	x86_pmu.pebs_buffer_size = PEBS_BUFFER_SIZE;
>>  	x86_pmu.drain_pebs = intel_pmu_drain_arch_pebs;
>>  	x86_pmu.pebs_capable = ~0ULL;
>> +	x86_pmu.flags |= PMU_FL_PEBS_ALL;
>>  }
>>  
>>  /*
