Message-ID: <d5b0e6ab-aff0-48be-b0a3-5a04bf328ab8@linux.intel.com>
Date: Wed, 5 Mar 2025 09:34:32 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: "Liang, Kan" <kan.liang@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo
<acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>, Eranian Stephane <eranian@...gle.com>,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
Dapeng Mi <dapeng1.mi@...el.com>
Subject: Re: [Patch v2 18/24] perf/x86/intel: Support arch-PEBS vector
registers group capturing
On 3/5/2025 12:26 AM, Liang, Kan wrote:
>
> On 2025-03-03 10:08 p.m., Mi, Dapeng wrote:
>> On 2/27/2025 2:40 PM, Mi, Dapeng wrote:
>>> On 2/26/2025 4:08 PM, Mi, Dapeng wrote:
>>>> On 2/25/2025 11:32 PM, Peter Zijlstra wrote:
>>>>> On Tue, Feb 18, 2025 at 03:28:12PM +0000, Dapeng Mi wrote:
>>>>>> Add x86/Intel-specific vector register (VECR) group capturing for
>>>>>> arch-PEBS. Enable the corresponding VECR group bits in the
>>>>>> GPx_CFG_C/FX0_CFG_C MSRs if the user configures these vector register
>>>>>> bitmaps in perf_event_attr, and parse the VECR group in the arch-PEBS record.
>>>>>>
>>>>>> Currently, vector register capturing is only supported by PEBS-based
>>>>>> sampling; the PMU driver returns an error if PMI-based sampling tries
>>>>>> to capture these vector registers.
>>>>>> @@ -676,6 +709,32 @@ int x86_pmu_hw_config(struct perf_event *event)
>>>>>> return -EINVAL;
>>>>>> }
>>>>>>
>>>>>> + /*
>>>>>> + * Architectural PEBS supports to capture more vector registers besides
>>>>>> + * XMM registers, like YMM, OPMASK and ZMM registers.
>>>>>> + */
>>>>>> + if (unlikely(has_more_extended_regs(event))) {
>>>>>> + u64 caps = hybrid(event->pmu, arch_pebs_cap).caps;
>>>>>> +
>>>>>> + if (!(event->pmu->capabilities & PERF_PMU_CAP_MORE_EXT_REGS))
>>>>>> + return -EINVAL;
>>>>>> +
>>>>>> + if (has_opmask_regs(event) && !(caps & ARCH_PEBS_VECR_OPMASK))
>>>>>> + return -EINVAL;
>>>>>> +
>>>>>> + if (has_ymmh_regs(event) && !(caps & ARCH_PEBS_VECR_YMM))
>>>>>> + return -EINVAL;
>>>>>> +
>>>>>> + if (has_zmmh_regs(event) && !(caps & ARCH_PEBS_VECR_ZMMH))
>>>>>> + return -EINVAL;
>>>>>> +
>>>>>> + if (has_h16zmm_regs(event) && !(caps & ARCH_PEBS_VECR_H16ZMM))
>>>>>> + return -EINVAL;
>>>>>> +
>>>>>> + if (!event->attr.precise_ip)
>>>>>> + return -EINVAL;
>>>>>> + }
>>>>>> +
>>>>>> return x86_setup_perfctr(event);
>>>>>> }
>>>>>>
>>>>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>>>>> index f21d9f283445..8ef5b9a05fcc 100644
>>>>>> --- a/arch/x86/events/intel/core.c
>>>>>> +++ b/arch/x86/events/intel/core.c
>>>>>> @@ -2963,6 +2963,18 @@ static void intel_pmu_enable_event_ext(struct perf_event *event)
>>>>>> if (pebs_data_cfg & PEBS_DATACFG_XMMS)
>>>>>> ext |= ARCH_PEBS_VECR_XMM & cap.caps;
>>>>>>
>>>>>> + if (pebs_data_cfg & PEBS_DATACFG_YMMS)
>>>>>> + ext |= ARCH_PEBS_VECR_YMM & cap.caps;
>>>>>> +
>>>>>> + if (pebs_data_cfg & PEBS_DATACFG_OPMASKS)
>>>>>> + ext |= ARCH_PEBS_VECR_OPMASK & cap.caps;
>>>>>> +
>>>>>> + if (pebs_data_cfg & PEBS_DATACFG_ZMMHS)
>>>>>> + ext |= ARCH_PEBS_VECR_ZMMH & cap.caps;
>>>>>> +
>>>>>> + if (pebs_data_cfg & PEBS_DATACFG_H16ZMMS)
>>>>>> + ext |= ARCH_PEBS_VECR_H16ZMM & cap.caps;
>>>>>> +
>>>>>> if (pebs_data_cfg & PEBS_DATACFG_LBRS)
>>>>>> ext |= ARCH_PEBS_LBR & cap.caps;
>>>>>>
>>>>>> @@ -5115,6 +5127,9 @@ static inline void __intel_update_pmu_caps(struct pmu *pmu)
>>>>>>
>>>>>> if (hybrid(pmu, arch_pebs_cap).caps & ARCH_PEBS_VECR_XMM)
>>>>>> dest_pmu->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
>>>>>> +
>>>>>> + if (hybrid(pmu, arch_pebs_cap).caps & ARCH_PEBS_VECR_EXT)
>>>>>> + dest_pmu->capabilities |= PERF_PMU_CAP_MORE_EXT_REGS;
>>>>>> }
>>>>> There is no technical reason for it to error out, right? We can use
>>>>> FPU/XSAVE interface to read the CPU state just fine.
>>>> I think it's not due to a technical reason. Let me confirm whether we
>>>> can add it for non-PEBS sampling.
>>> Hi Peter,
>>>
>>> Just to double-confirm: do you want only PEBS sampling to support
>>> capturing SSP and these vector registers for both *interrupt* and *user
>>> space*? Or, going further, should PMI-based sampling also be able to
>>> capture SSP and these vector registers? Thanks.
> I think one of the main reasons to add the vector registers to PEBS
> records is large PEBS: perf can get all the registers of interest
> while avoiding a PMI for each sample.
> Technically, I don't think there is a problem supporting them in
> non-PEBS PMI sampling, but I'm not sure it's useful in practice.
>
> REGS_USER should be more useful. Large PEBS is also available as long
> as exclude_kernel is set.
>
> In my opinion, we could support the new vector registers for both
> REGS_USER and REGS_INTR with PEBS events only for now, and add support
> for non-PEBS events later if there is a requirement.
Yes, agreed. I plan to support these newly added registers for both
REGS_USER and REGS_INTR in v3. If anyone has a different opinion, please
let me know. Thanks.
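
As background on the existing uapi: XMM capture via PEBS is already requested
with PERF_SAMPLE_REGS_INTR plus PERF_REG_EXTENDED_MASK in sample_regs_intr.
The sketch below hard-codes those current uapi values rather than pulling in
kernel headers, and the struct is a minimal stand-in, not the real
perf_event_attr; the new YMM/ZMM/OPMASK bits from this series would land in
the proposed sample_regs_intr_ext / sample_regs_user_ext fields instead:

```c
#include <assert.h>
#include <stdint.h>

/* Values mirrored from the existing x86 perf uapi
 * (arch/x86/include/uapi/asm/perf_regs.h). */
#define PERF_REG_X86_XMM0	32
#define PERF_REG_EXTENDED_MASK	(~((1ULL << PERF_REG_X86_XMM0) - 1))

/* Sample-type bits from include/uapi/linux/perf_event.h. */
#define PERF_SAMPLE_REGS_USER	(1ULL << 12)
#define PERF_SAMPLE_REGS_INTR	(1ULL << 18)

/* Minimal stand-in for the perf_event_attr fields discussed here. */
struct sample_regs_cfg {
	uint64_t sample_type;
	uint64_t sample_regs_intr;
	uint64_t sample_regs_user;
};

/* Request XMM0..XMM15 at interrupt time; with precise_ip set this is
 * serviced from the PEBS record, which is what makes large PEBS work. */
static struct sample_regs_cfg request_xmm_intr(void)
{
	struct sample_regs_cfg cfg = { 0 };

	cfg.sample_type |= PERF_SAMPLE_REGS_INTR;
	cfg.sample_regs_intr |= PERF_REG_EXTENDED_MASK;
	return cfg;
}
```

The REGS_USER case is the same shape with PERF_SAMPLE_REGS_USER and
sample_regs_user, which is why supporting both from one PEBS record is
cheap.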
>
> Thanks,
> Kan
>
>> Hi Peter,
>>
>> May I know your opinion on this? Thanks.
>>
>>
>>>>>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>>>>>> index 4b01beee15f4..7e5a4202de37 100644
>>>>>> --- a/arch/x86/events/intel/ds.c
>>>>>> +++ b/arch/x86/events/intel/ds.c
>>>>>> @@ -1437,9 +1438,37 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
>>>>>> if (gprs || (attr->precise_ip < 2) || tsx_weight)
>>>>>> pebs_data_cfg |= PEBS_DATACFG_GP;
>>>>>>
>>>>>> - if ((sample_type & PERF_SAMPLE_REGS_INTR) &&
>>>>>> - (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK))
>>>>>> - pebs_data_cfg |= PEBS_DATACFG_XMMS;
>>>>>> + if (sample_type & PERF_SAMPLE_REGS_INTR) {
>>>>>> + if (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK)
>>>>>> + pebs_data_cfg |= PEBS_DATACFG_XMMS;
>>>>>> +
>>>>>> + for_each_set_bit_from(bit,
>>>>>> + (unsigned long *)event->attr.sample_regs_intr_ext,
>>>>>> + PERF_NUM_EXT_REGS) {
>>>>> This is indented wrong; please use cino=(0:0
>>>>> if you worry about indentation depth, break out in helper function.
>>>> Sure, I will modify it.
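
For what it's worth, the range-to-DATACFG mapping could be pulled out into a
flat helper roughly like this. This is a standalone, userspace-compilable
sketch: the PEBS_DATACFG_* values, the register indices, and the open-coded
bit test are illustrative placeholders for the real definitions in this
series, not the actual kernel constants:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins only -- NOT the real kernel definitions. */
#define PEBS_DATACFG_OPMASKS	(1ULL << 4)
#define PEBS_DATACFG_YMMS	(1ULL << 5)
#define PEBS_DATACFG_ZMMHS	(1ULL << 6)
#define PEBS_DATACFG_H16ZMMS	(1ULL << 7)

/* Hypothetical extended-register index layout, one range per group. */
enum {
	REG_OPMASK0	= 0,	/* OPMASK0..OPMASK7 */
	REG_YMMH0	= 8,	/* first YMM-high register */
	REG_ZMMH0	= 40,	/* first ZMM-high register */
	REG_ZMM16	= 72,	/* first high-16 ZMM register */
	REG_ZMM_MAX	= 104,
};

static int ext_reg_test_bit(const uint64_t *mask, int bit)
{
	return (mask[bit / 64] >> (bit % 64)) & 1;
}

/*
 * Walk the extended-regs bitmap once; the first set bit in a range
 * turns on that range's DATACFG bit, then the loop skips straight to
 * the start of the next range.
 */
static uint64_t ext_regs_to_pebs_cfg(const uint64_t *mask, int nbits)
{
	uint64_t cfg = 0;
	int bit;

	for (bit = 0; bit < nbits; bit++) {
		if (!ext_reg_test_bit(mask, bit))
			continue;

		if (bit < REG_YMMH0) {
			cfg |= PEBS_DATACFG_OPMASKS;
			bit = REG_YMMH0 - 1;	/* ++ lands on REG_YMMH0 */
		} else if (bit < REG_ZMMH0) {
			cfg |= PEBS_DATACFG_YMMS;
			bit = REG_ZMMH0 - 1;
		} else if (bit < REG_ZMM16) {
			cfg |= PEBS_DATACFG_ZMMHS;
			bit = REG_ZMM16 - 1;
		} else {
			cfg |= PEBS_DATACFG_H16ZMMS;
			bit = REG_ZMM_MAX - 1;
		}
	}
	return cfg;
}
```

The point is just the shape: one flat helper keeps
pebs_update_adaptive_cfg() shallow and avoids the deep indentation flagged
above.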
>>>>
>>>>
>>>>>> + switch (bit + PERF_REG_EXTENDED_OFFSET) {
>>>>>> + case PERF_REG_X86_OPMASK0 ... PERF_REG_X86_OPMASK7:
>>>>>> + pebs_data_cfg |= PEBS_DATACFG_OPMASKS;
>>>>>> + bit = PERF_REG_X86_YMMH0 -
>>>>>> + PERF_REG_EXTENDED_OFFSET - 1;
>>>>>> + break;
>>>>>> + case PERF_REG_X86_YMMH0 ... PERF_REG_X86_ZMMH0 - 1:
>>>>>> + pebs_data_cfg |= PEBS_DATACFG_YMMS;
>>>>>> + bit = PERF_REG_X86_ZMMH0 -
>>>>>> + PERF_REG_EXTENDED_OFFSET - 1;
>>>>>> + break;
>>>>>> + case PERF_REG_X86_ZMMH0 ... PERF_REG_X86_ZMM16 - 1:
>>>>>> + pebs_data_cfg |= PEBS_DATACFG_ZMMHS;
>>>>>> + bit = PERF_REG_X86_ZMM16 -
>>>>>> + PERF_REG_EXTENDED_OFFSET - 1;
>>>>>> + break;
>>>>>> + case PERF_REG_X86_ZMM16 ... PERF_REG_X86_ZMM_MAX - 1:
>>>>>> + pebs_data_cfg |= PEBS_DATACFG_H16ZMMS;
>>>>>> + bit = PERF_REG_X86_ZMM_MAX -
>>>>>> + PERF_REG_EXTENDED_OFFSET - 1;
>>>>>> + break;
>>>>>> + }
>>>>>> + }
>>>>>> + }
>>>>>>
>>>>>> if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
>>>>>> /*
>