Message-ID: <69c3b712-0e6b-65d9-a0f9-40d939cd9d54@intel.com>
Date: Tue, 18 May 2021 16:13:34 +0800
From: "Xu, Like" <like.xu@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, weijiang.yang@...el.com,
Kan Liang <kan.liang@...ux.intel.com>, ak@...ux.intel.com,
wei.w.wang@...el.com, eranian@...gle.com, liuxiangdong5@...wei.com,
linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org,
Like Xu <like.xu@...ux.intel.com>
Subject: Re: [PATCH v6 06/16] KVM: x86/pmu: Add IA32_PEBS_ENABLE MSR emulation
for extended PEBS
On 2021/5/17 16:33, Peter Zijlstra wrote:
> On Tue, May 11, 2021 at 10:42:04AM +0800, Like Xu wrote:
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 2f89fd599842..c791765f4761 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3898,31 +3898,49 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
>>  	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>>  	struct perf_guest_switch_msr *arr = cpuc->guest_switch_msrs;
>>  	u64 intel_ctrl = hybrid(cpuc->pmu, intel_ctrl);
>> +	u64 pebs_mask = (x86_pmu.flags & PMU_FL_PEBS_ALL) ?
>> +		cpuc->pebs_enabled : (cpuc->pebs_enabled & PEBS_COUNTER_MASK);
>> -	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
>> -		arr[0].guest &= ~cpuc->pebs_enabled;
>> -	else
>> -		arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
>> -	*nr = 1;
> Instead of endlessly mucking about with branches, do we want
> something like this?
Fine with me. How about this commit message for your patch below:
x86/perf/core: Add pebs_capable to store valid PEBS_COUNTER_MASK value
The PEBS counter mask is needed on every call to intel_guest_get_msrs().
Cache the valid mask in a new pebs_capable field at init time, instead
of endlessly mucking about with branches on each invocation.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
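
For reference, a minimal self-contained sketch of the before/after
logic. The struct and helper names (fake_pmu, guest_mask_*) are
illustrative stand-ins, not the kernel's actual types, and the
PEBS_COUNTER_MASK value here is a placeholder (the real macro covers
the GP and fixed PEBS-capable counter bits):

#include <stdint.h>
#include <stdio.h>

#define PMU_FL_PEBS_ALL		0x1
#define PEBS_COUNTER_MASK	0xffffffffULL	/* placeholder value */

struct fake_pmu {
	unsigned int flags;
	uint64_t pebs_capable;	/* set once at init, used on every call */
};

/* Old shape: test the feature flag on every invocation. */
static uint64_t guest_mask_branchy(const struct fake_pmu *pmu,
				   uint64_t pebs_enabled, uint64_t guest)
{
	if (pmu->flags & PMU_FL_PEBS_ALL)
		return guest & ~pebs_enabled;
	return guest & ~(pebs_enabled & PEBS_COUNTER_MASK);
}

/* New shape: the flag test was folded into pebs_capable at init. */
static uint64_t guest_mask_cached(const struct fake_pmu *pmu,
				  uint64_t pebs_enabled, uint64_t guest)
{
	return guest & ~(pebs_enabled & pmu->pebs_capable);
}

int main(void)
{
	struct fake_pmu pmu = { .flags = PMU_FL_PEBS_ALL };

	/* Init-time decision, mirroring the intel_pmu_init() hunks below. */
	pmu.pebs_capable = (pmu.flags & PMU_FL_PEBS_ALL) ?
			   ~0ULL : PEBS_COUNTER_MASK;

	uint64_t guest = 0xff, pebs_enabled = 0x3;

	printf("branchy: %#llx\n", (unsigned long long)
	       guest_mask_branchy(&pmu, pebs_enabled, guest));
	printf("cached:  %#llx\n", (unsigned long long)
	       guest_mask_cached(&pmu, pebs_enabled, guest));
	return 0;
}

Both helpers return the same mask for any flag setting; the point of
the change is that PEBS capability is fixed at boot, so the branch can
be folded into a per-PMU constant once in intel_pmu_init() rather than
re-evaluated on every PMI and every VM-entry/exit MSR switch.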
>
> ---
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 2521d03de5e0..bcfba11196c8 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2819,10 +2819,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  	 * counters from the GLOBAL_STATUS mask and we always process PEBS
>  	 * events via drain_pebs().
>  	 */
> -	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
> -		status &= ~cpuc->pebs_enabled;
> -	else
> -		status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
> +	status &= ~(cpuc->pebs_enabled & x86_pmu.pebs_capable);
>
>  	/*
>  	 * PEBS overflow sets bit 62 in the global status register
> @@ -3862,10 +3859,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
>  	arr[0].msr = MSR_CORE_PERF_GLOBAL_CTRL;
>  	arr[0].host = intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
>  	arr[0].guest = intel_ctrl & ~cpuc->intel_ctrl_host_mask;
> -	if (x86_pmu.flags & PMU_FL_PEBS_ALL)
> -		arr[0].guest &= ~cpuc->pebs_enabled;
> -	else
> -		arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
> +	arr[0].guest &= ~(cpuc->pebs_enabled & x86_pmu.pebs_capable);
>  	*nr = 1;
>
>  	if (x86_pmu.pebs && x86_pmu.pebs_no_isolation) {
> @@ -5546,6 +5540,7 @@ __init int intel_pmu_init(void)
>  	x86_pmu.events_mask_len = eax.split.mask_length;
> 
>  	x86_pmu.max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters);
> +	x86_pmu.pebs_capable = PEBS_COUNTER_MASK;
>
>  	/*
>  	 * Quirk: v2 perfmon does not report fixed-purpose events, so
> @@ -5730,6 +5725,7 @@ __init int intel_pmu_init(void)
>  		x86_pmu.pebs_aliases = NULL;
>  		x86_pmu.pebs_prec_dist = true;
>  		x86_pmu.lbr_pt_coexist = true;
> +		x86_pmu.pebs_capable = ~0ULL;
>  		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
>  		x86_pmu.flags |= PMU_FL_PEBS_ALL;
>  		x86_pmu.get_event_constraints = glp_get_event_constraints;
> @@ -6080,6 +6076,7 @@ __init int intel_pmu_init(void)
>  		x86_pmu.pebs_aliases = NULL;
>  		x86_pmu.pebs_prec_dist = true;
>  		x86_pmu.pebs_block = true;
> +		x86_pmu.pebs_capable = ~0ULL;
>  		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
>  		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
>  		x86_pmu.flags |= PMU_FL_PEBS_ALL;
> @@ -6123,6 +6120,7 @@ __init int intel_pmu_init(void)
>  		x86_pmu.pebs_aliases = NULL;
>  		x86_pmu.pebs_prec_dist = true;
>  		x86_pmu.pebs_block = true;
> +		x86_pmu.pebs_capable = ~0ULL;
>  		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
>  		x86_pmu.flags |= PMU_FL_NO_HT_SHARING;
>  		x86_pmu.flags |= PMU_FL_PEBS_ALL;
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index 27fa85e7d4fd..6f3cf81ccb1b 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -805,6 +805,7 @@ struct x86_pmu {
>  	void (*pebs_aliases)(struct perf_event *event);
>  	unsigned long large_pebs_flags;
>  	u64 rtm_abort_event;
> +	u64 pebs_capable;
>
>  	/*
>  	 * Intel LBR