Message-ID: <71c0f66a-9ee8-4c01-8a29-2c6faf015b4d@linux.intel.com>
Date: Tue, 4 Mar 2025 11:08:41 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo
 <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
 Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
 Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
 Kan Liang <kan.liang@...ux.intel.com>, Andi Kleen <ak@...ux.intel.com>,
 Eranian Stephane <eranian@...gle.com>, linux-kernel@...r.kernel.org,
 linux-perf-users@...r.kernel.org, Dapeng Mi <dapeng1.mi@...el.com>
Subject: Re: [Patch v2 18/24] perf/x86/intel: Support arch-PEBS vector
 registers group capturing


On 2/27/2025 2:40 PM, Mi, Dapeng wrote:
> On 2/26/2025 4:08 PM, Mi, Dapeng wrote:
>> On 2/25/2025 11:32 PM, Peter Zijlstra wrote:
>>> On Tue, Feb 18, 2025 at 03:28:12PM +0000, Dapeng Mi wrote:
>>>> Add x86/intel specific vector register (VECR) group capturing for
>>>> arch-PEBS. Enable the corresponding VECR group bits in the
>>>> GPx_CFG_C/FX0_CFG_C MSRs if the user configures these vector register
>>>> bitmaps in perf_event_attr, and parse the VECR group in the arch-PEBS
>>>> record.
>>>>
>>>> Currently, capturing vector registers is only supported by PEBS-based
>>>> sampling; the PMU driver returns an error if PMI-based sampling tries
>>>> to capture them.
>>>> @@ -676,6 +709,32 @@ int x86_pmu_hw_config(struct perf_event *event)
>>>>  			return -EINVAL;
>>>>  	}
>>>>  
>>>> +	/*
>>>> +	 * Architectural PEBS supports capturing more vector registers
>>>> +	 * besides XMM, such as the YMM, OPMASK and ZMM registers.
>>>> +	 */
>>>> +	if (unlikely(has_more_extended_regs(event))) {
>>>> +		u64 caps = hybrid(event->pmu, arch_pebs_cap).caps;
>>>> +
>>>> +		if (!(event->pmu->capabilities & PERF_PMU_CAP_MORE_EXT_REGS))
>>>> +			return -EINVAL;
>>>> +
>>>> +		if (has_opmask_regs(event) && !(caps & ARCH_PEBS_VECR_OPMASK))
>>>> +			return -EINVAL;
>>>> +
>>>> +		if (has_ymmh_regs(event) && !(caps & ARCH_PEBS_VECR_YMM))
>>>> +			return -EINVAL;
>>>> +
>>>> +		if (has_zmmh_regs(event) && !(caps & ARCH_PEBS_VECR_ZMMH))
>>>> +			return -EINVAL;
>>>> +
>>>> +		if (has_h16zmm_regs(event) && !(caps & ARCH_PEBS_VECR_H16ZMM))
>>>> +			return -EINVAL;
>>>> +
>>>> +		if (!event->attr.precise_ip)
>>>> +			return -EINVAL;
>>>> +	}
>>>> +
>>>>  	return x86_setup_perfctr(event);
>>>>  }
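
For context, here is a minimal user-space sketch of an event configuration
that would pass these checks. It assumes this series' uAPI: the
sample_regs_intr_ext bitmap, PERF_REG_X86_YMMH0 and
PERF_REG_EXTENDED_OFFSET come from these patches and are not in any
released kernel; error handling is omitted:

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int open_pebs_ymm_event(void)
	{
		struct perf_event_attr attr = {
			.type          = PERF_TYPE_HARDWARE,
			.size          = sizeof(attr),
			.config        = PERF_COUNT_HW_CPU_CYCLES,
			.sample_period = 100003,
			.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_REGS_INTR,
			.precise_ip    = 1,	/* must be non-zero, or hw_config rejects it */
		};

		/*
		 * Hypothetical: request YMMH0 via the extended register
		 * bitmap added by this series; the field and macro names
		 * follow the patches, not a released uAPI.
		 */
		attr.sample_regs_intr_ext[0] =
			1ULL << (PERF_REG_X86_YMMH0 - PERF_REG_EXTENDED_OFFSET);

		return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	}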
>>>>  
>>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>>> index f21d9f283445..8ef5b9a05fcc 100644
>>>> --- a/arch/x86/events/intel/core.c
>>>> +++ b/arch/x86/events/intel/core.c
>>>> @@ -2963,6 +2963,18 @@ static void intel_pmu_enable_event_ext(struct perf_event *event)
>>>>  			if (pebs_data_cfg & PEBS_DATACFG_XMMS)
>>>>  				ext |= ARCH_PEBS_VECR_XMM & cap.caps;
>>>>  
>>>> +			if (pebs_data_cfg & PEBS_DATACFG_YMMS)
>>>> +				ext |= ARCH_PEBS_VECR_YMM & cap.caps;
>>>> +
>>>> +			if (pebs_data_cfg & PEBS_DATACFG_OPMASKS)
>>>> +				ext |= ARCH_PEBS_VECR_OPMASK & cap.caps;
>>>> +
>>>> +			if (pebs_data_cfg & PEBS_DATACFG_ZMMHS)
>>>> +				ext |= ARCH_PEBS_VECR_ZMMH & cap.caps;
>>>> +
>>>> +			if (pebs_data_cfg & PEBS_DATACFG_H16ZMMS)
>>>> +				ext |= ARCH_PEBS_VECR_H16ZMM & cap.caps;
>>>> +
>>>>  			if (pebs_data_cfg & PEBS_DATACFG_LBRS)
>>>>  				ext |= ARCH_PEBS_LBR & cap.caps;
>>>>  
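
As an aside: since these five checks are plain one-to-one translations,
the same thing could be table-driven, e.g. (just a sketch with the same
behavior, using the PEBS_DATACFG_*/ARCH_PEBS_VECR_* names from this
series; the helper name is a placeholder):

	static u64 intel_pebs_vecr_ext(u64 pebs_data_cfg, u64 caps)
	{
		/* one-to-one PEBS_DATACFG_* -> ARCH_PEBS_VECR_* mapping */
		static const struct {
			u64 datacfg;
			u64 vecr;
		} map[] = {
			{ PEBS_DATACFG_XMMS,    ARCH_PEBS_VECR_XMM },
			{ PEBS_DATACFG_YMMS,    ARCH_PEBS_VECR_YMM },
			{ PEBS_DATACFG_OPMASKS, ARCH_PEBS_VECR_OPMASK },
			{ PEBS_DATACFG_ZMMHS,   ARCH_PEBS_VECR_ZMMH },
			{ PEBS_DATACFG_H16ZMMS, ARCH_PEBS_VECR_H16ZMM },
		};
		u64 ext = 0;
		int i;

		for (i = 0; i < ARRAY_SIZE(map); i++) {
			if (pebs_data_cfg & map[i].datacfg)
				ext |= map[i].vecr & caps;
		}

		return ext;
	}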
>>>> @@ -5115,6 +5127,9 @@ static inline void __intel_update_pmu_caps(struct pmu *pmu)
>>>>  
>>>>  	if (hybrid(pmu, arch_pebs_cap).caps & ARCH_PEBS_VECR_XMM)
>>>>  		dest_pmu->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
>>>> +
>>>> +	if (hybrid(pmu, arch_pebs_cap).caps & ARCH_PEBS_VECR_EXT)
>>>> +		dest_pmu->capabilities |= PERF_PMU_CAP_MORE_EXT_REGS;
>>>>  }
>>> There is no technical reason for it to error out, right? We can use
>>> the FPU/XSAVE interface to read the CPU state just fine.
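
For the record, reading a task's vector state out of its XSAVE area
would look roughly like this. This is only a sketch: it assumes the
interrupted task's live registers have already been flushed to its
fpstate (which a real PMI path would have to guarantee) and that
get_xsave_addr() is visible at the call site:

	#include <linux/sched.h>
	#include <asm/fpu/xstate.h>

	/*
	 * Sketch: return a pointer to @tsk's saved YMM-high state, or NULL
	 * if the component is in its init state. Only meaningful once the
	 * live registers have been saved into tsk->thread.fpu.fpstate.
	 */
	static void *task_ymmh_state(struct task_struct *tsk)
	{
		struct xregs_state *xsave = &tsk->thread.fpu.fpstate->regs.xsave;

		return get_xsave_addr(xsave, XFEATURE_YMM);
	}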
>> I think it's not for a technical reason. Let me confirm whether we can
>> add it for non-PEBS sampling.
> Hi Peter,
>
> Just to double confirm: do you want only PEBS sampling to support
> capturing SSP and these vector registers, for both *interrupt* and
> *user space*? Or, going further, should PMI-based sampling also support
> capturing SSP and these vector registers? Thanks.

Hi Peter,

May I have your opinion on this? Thanks.


>
>>
>>>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>>>> index 4b01beee15f4..7e5a4202de37 100644
>>>> --- a/arch/x86/events/intel/ds.c
>>>> +++ b/arch/x86/events/intel/ds.c
>>>> @@ -1437,9 +1438,37 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
>>>>  	if (gprs || (attr->precise_ip < 2) || tsx_weight)
>>>>  		pebs_data_cfg |= PEBS_DATACFG_GP;
>>>>  
>>>> -	if ((sample_type & PERF_SAMPLE_REGS_INTR) &&
>>>> -	    (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK))
>>>> -		pebs_data_cfg |= PEBS_DATACFG_XMMS;
>>>> +	if (sample_type & PERF_SAMPLE_REGS_INTR) {
>>>> +		if (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK)
>>>> +			pebs_data_cfg |= PEBS_DATACFG_XMMS;
>>>> +
>>>> +		for_each_set_bit_from(bit,
>>>> +			(unsigned long *)event->attr.sample_regs_intr_ext,
>>>> +			PERF_NUM_EXT_REGS) {
>>> This is indented wrong; please use cino=(0:0
>>> If you worry about indentation depth, break it out into a helper function.
>> Sure, I'll break this out into a helper function; a sketch follows at
>> the end of this mail.
>>
>>
>>>> +			switch (bit + PERF_REG_EXTENDED_OFFSET) {
>>>> +			case PERF_REG_X86_OPMASK0 ... PERF_REG_X86_OPMASK7:
>>>> +				pebs_data_cfg |= PEBS_DATACFG_OPMASKS;
>>>> +				bit = PERF_REG_X86_YMMH0 -
>>>> +				      PERF_REG_EXTENDED_OFFSET - 1;
>>>> +				break;
>>>> +			case PERF_REG_X86_YMMH0 ... PERF_REG_X86_ZMMH0 - 1:
>>>> +				pebs_data_cfg |= PEBS_DATACFG_YMMS;
>>>> +				bit = PERF_REG_X86_ZMMH0 -
>>>> +				      PERF_REG_EXTENDED_OFFSET - 1;
>>>> +				break;
>>>> +			case PERF_REG_X86_ZMMH0 ... PERF_REG_X86_ZMM16 - 1:
>>>> +				pebs_data_cfg |= PEBS_DATACFG_ZMMHS;
>>>> +				bit = PERF_REG_X86_ZMM16 -
>>>> +				      PERF_REG_EXTENDED_OFFSET - 1;
>>>> +				break;
>>>> +			case PERF_REG_X86_ZMM16 ... PERF_REG_X86_ZMM_MAX - 1:
>>>> +				pebs_data_cfg |= PEBS_DATACFG_H16ZMMS;
>>>> +				bit = PERF_REG_X86_ZMM_MAX -
>>>> +				      PERF_REG_EXTENDED_OFFSET - 1;
>>>> +				break;
>>>> +			}
>>>> +		}
>>>> +	}
>>>>  
>>>>  	if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
>>>>  		/*
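
For v3 I plan to break the loop out along these lines (a sketch only:
same logic as above, just moved into a helper so the switch sits at a
sane indentation level; the helper name is a placeholder and the
existing XMM check stays in the caller):

	static u64 pebs_update_ext_regs_cfg(struct perf_event *event)
	{
		unsigned long *regs = (unsigned long *)event->attr.sample_regs_intr_ext;
		u64 pebs_data_cfg = 0;
		int bit = 0;

		for_each_set_bit_from(bit, regs, PERF_NUM_EXT_REGS) {
			switch (bit + PERF_REG_EXTENDED_OFFSET) {
			case PERF_REG_X86_OPMASK0 ... PERF_REG_X86_OPMASK7:
				pebs_data_cfg |= PEBS_DATACFG_OPMASKS;
				/* skip ahead to the start of the next register group */
				bit = PERF_REG_X86_YMMH0 - PERF_REG_EXTENDED_OFFSET - 1;
				break;
			case PERF_REG_X86_YMMH0 ... PERF_REG_X86_ZMMH0 - 1:
				pebs_data_cfg |= PEBS_DATACFG_YMMS;
				bit = PERF_REG_X86_ZMMH0 - PERF_REG_EXTENDED_OFFSET - 1;
				break;
			case PERF_REG_X86_ZMMH0 ... PERF_REG_X86_ZMM16 - 1:
				pebs_data_cfg |= PEBS_DATACFG_ZMMHS;
				bit = PERF_REG_X86_ZMM16 - PERF_REG_EXTENDED_OFFSET - 1;
				break;
			case PERF_REG_X86_ZMM16 ... PERF_REG_X86_ZMM_MAX - 1:
				pebs_data_cfg |= PEBS_DATACFG_H16ZMMS;
				bit = PERF_REG_X86_ZMM_MAX - PERF_REG_EXTENDED_OFFSET - 1;
				break;
			}
		}

		return pebs_data_cfg;
	}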
