Message-ID: <Y0W/YR6gXhunJYry@google.com>
Date: Tue, 11 Oct 2022 19:09:21 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Like Xu <like.xu.linux@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Jim Mattson <jmattson@...gle.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v2 2/2] KVM: x86/pmu: Add PEBS support for SPR and future
non-hybrid models

Kind of a nit, but I would prefer a shortlog that talks about the pdist/pdir
stuff and not an "enable PEBS" bucket.

On Thu, Sep 22, 2022, Like Xu wrote:
> @@ -140,6 +149,16 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
> __kvm_perf_overflow(pmc, true);
> }
>
> +static bool need_max_precise(struct kvm_pmc *pmc)
> +{
> + if (pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu))
> + return true;
> + if (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu))
> + return true;
> +
> + return false;
> +}
> +
> static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
> u64 config, bool exclude_user,
> bool exclude_kernel, bool intr)
> @@ -181,11 +200,11 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
> * the accuracy of the PEBS profiling result, because the "event IP"
> * in the PEBS record is calibrated on the guest side.
> *
> - * On Icelake everything is fine. Other hardware (GLC+, TNT+) that
> + * On Icelake everything is fine. Other hardware (TNT+) that
> * could possibly care here is unsupported and needs changes.
This part of the comment is still somewhat stale, and for me at least it's not at
all helpful.
> */
> attr.precise_ip = 1;
> - if (x86_match_cpu(vmx_icl_pebs_cpu) && pmc->idx == 32)
> + if (need_max_precise(pmc))
> attr.precise_ip = 3;
What about writing this as:
attr.precise_ip = pmc_get_pebs_precision(pmc);
(or whatever name is appropriate for "pebs_precision").
Then in the helper, add comments to explain the magic numbers and the interaction
with PDIST and PDIR. Bonus points if #defines for the magic numbers can be
added somewhere.
/*
 * 0 - SAMPLE_IP can have arbitrary skid
 * 1 - SAMPLE_IP must have constant skid
 * 2 - SAMPLE_IP requested to have 0 skid
 * 3 - SAMPLE_IP must have 0 skid
 */
static u64 pmc_get_pebs_precision(struct kvm_pmc *pmc)
{
	/* Comment that explains why PDIST/PDIR require 0 skid? */
	if ((pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu)) ||
	    (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu)))
		return 3;

	/* Comment about constant skid? */
	return 1;
}
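
For the #defines, a rough sketch with illustrative names (the PEBS_SKID_*
names below are made up for this example, not existing kernel symbols):

#define PEBS_SKID_ARBITRARY		0	/* SAMPLE_IP can have arbitrary skid */
#define PEBS_SKID_CONSTANT		1	/* SAMPLE_IP must have constant skid */
#define PEBS_SKID_ZERO_REQUESTED	2	/* SAMPLE_IP requested to have 0 skid */
#define PEBS_SKID_ZERO			3	/* SAMPLE_IP must have 0 skid */

static u64 pmc_get_pebs_precision(struct kvm_pmc *pmc)
{
	/*
	 * PDIST (GP counter 0) and PDIR (fixed counter 0, i.e. idx 32) can
	 * demand 0 skid, i.e. a fully precise IP in the PEBS record.
	 */
	if ((pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu)) ||
	    (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu)))
		return PEBS_SKID_ZERO;

	/* Everything else only guarantees constant skid. */
	return PEBS_SKID_CONSTANT;
}
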
> }
>
> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> index c5e5dfef69c7..4dc4bbe18821 100644
> --- a/arch/x86/kvm/vmx/capabilities.h
> +++ b/arch/x86/kvm/vmx/capabilities.h
> @@ -398,7 +398,9 @@ static inline bool vmx_pt_mode_is_host_guest(void)
>
> static inline bool vmx_pebs_supported(void)
> {
> - return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
> + return boot_cpu_has(X86_FEATURE_PEBS) &&
> + !boot_cpu_has(X86_FEATURE_HYBRID_CPU) &&
This belongs in a separate patch, and it should be ordered before patch 1 so that
there's no window where KVM can see pebs_ept==1 on a hybrid CPU.
Actually, shouldn't everything in this patch land before core enabling?
> + kvm_pmu_cap.pebs_ept;
Please align indentation:
	return boot_cpu_has(X86_FEATURE_PEBS) &&
	       !boot_cpu_has(X86_FEATURE_HYBRID_CPU) &&
	       kvm_pmu_cap.pebs_ept;