[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CALMp9eSaqMf_Ww4yU3O5jaADWATe5ush5fziq-wWo22COyePmQ@mail.gmail.com>
Date: Tue, 6 Sep 2022 21:15:22 -0700
From: Jim Mattson <jmattson@...gle.com>
To: Like Xu <like.xu.linux@...il.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Sandipan Das <sandipan.das@....com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] KVM: x86/svm/pmu: Add AMD PerfMonV2 support
On Tue, Sep 6, 2022 at 8:50 PM Like Xu <like.xu.linux@...il.com> wrote:
>
> On 7/9/2022 4:19 am, Jim Mattson wrote:
> > On Tue, Sep 6, 2022 at 5:45 AM Like Xu <like.xu.linux@...il.com> wrote:
> >>
> >> On 6/9/2022 2:00 am, Jim Mattson wrote:
> >>> On Mon, Sep 5, 2022 at 5:44 AM Like Xu <like.xu.linux@...il.com> wrote:
> >>>>
> >>>> From: Like Xu <likexu@...cent.com>
> >>>>
> >>>> If AMD Performance Monitoring Version 2 (PerfMonV2) is detected
> >>>> by the guest, it can use a new scheme to manage the Core PMCs using
> >>>> the new global control and status registers.
> >>>>
> >>>> In addition to benefiting from the PerfMonV2 functionality in the same
> >>>> way as the host (higher precision), the guest also can reduce the number
> >>>> of vm-exits by lowering the total number of MSRs accesses.
> >>>>
> >>>> In terms of implementation details, amd_is_valid_msr() is resurrected
> >>>> since three newly added MSRs could not be mapped to one vPMC.
> >>>> The possibility of emulating PerfMonV2 on the mainframe has also
> >>>> been eliminated for reasons of precision.
> >>>>
> >>>> Co-developed-by: Sandipan Das <sandipan.das@....com>
> >>>> Signed-off-by: Sandipan Das <sandipan.das@....com>
> >>>> Signed-off-by: Like Xu <likexu@...cent.com>
> >>>> ---
> >>>> arch/x86/kvm/pmu.c | 6 +++++
> >>>> arch/x86/kvm/svm/pmu.c | 50 +++++++++++++++++++++++++++++++++---------
> >>>> arch/x86/kvm/x86.c | 11 ++++++++++
> >>>> 3 files changed, 57 insertions(+), 10 deletions(-)
> >>>>
> >>>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> >>>> index 7002e1b74108..56b4f898a246 100644
> >>>> --- a/arch/x86/kvm/pmu.c
> >>>> +++ b/arch/x86/kvm/pmu.c
> >>>> @@ -455,12 +455,15 @@ int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>>>
> >>>> switch (msr) {
> >>>> case MSR_CORE_PERF_GLOBAL_STATUS:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
> >>>> msr_info->data = pmu->global_status;
> >>>> return 0;
> >>>> case MSR_CORE_PERF_GLOBAL_CTRL:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
> >>>> msr_info->data = pmu->global_ctrl;
> >>>> return 0;
> >>>> case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
> >>>> msr_info->data = 0;
> >>>> return 0;
> >>>> default:
> >>>> @@ -479,12 +482,14 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>>>
> >>>> switch (msr) {
> >>>> case MSR_CORE_PERF_GLOBAL_STATUS:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
> >>>> if (msr_info->host_initiated) {
> >>>> pmu->global_status = data;
> >>>> return 0;
> >>>> }
> >>>> break; /* RO MSR */
> >>>> case MSR_CORE_PERF_GLOBAL_CTRL:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
> >>>> if (pmu->global_ctrl == data)
> >>>> return 0;
> >>>> if (kvm_valid_perf_global_ctrl(pmu, data)) {
> >>>> @@ -495,6 +500,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>>> }
> >>>> break;
> >>>> case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
> >>>> if (!(data & pmu->global_ovf_ctrl_mask)) {
> >>>> if (!msr_info->host_initiated)
> >>>> pmu->global_status &= ~data;
> >>>> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> >>>> index 3a20972e9f1a..4c7d408e3caa 100644
> >>>> --- a/arch/x86/kvm/svm/pmu.c
> >>>> +++ b/arch/x86/kvm/svm/pmu.c
> >>>> @@ -92,12 +92,6 @@ static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
> >>>> return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
> >>>> }
> >>>>
> >>>> -static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
> >>>> -{
> >>>> - /* All MSRs refer to exactly one PMC, so msr_idx_to_pmc is enough. */
> >>>> - return false;
> >>>> -}
> >>>> -
> >>>> static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
> >>>> {
> >>>> struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> >>>> @@ -109,6 +103,29 @@ static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
> >>>> return pmc;
> >>>> }
> >>>>
> >>>> +static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
> >>>> +{
> >>>> + struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> >>>> +
> >>>> + switch (msr) {
> >>>> + case MSR_K7_EVNTSEL0 ... MSR_K7_PERFCTR3:
> >>>> + return pmu->version > 0;
> >>>> + case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
> >>>> + return guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE);
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
> >>>> + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
> >>>> + return pmu->version > 1;
> >>>> + default:
> >>>> + if (msr > MSR_F15H_PERF_CTR5 &&
> >>>> + msr < MSR_F15H_PERF_CTL0 + 2 * KVM_AMD_PMC_MAX_GENERIC)
> >>>> + return pmu->version > 1;
> >>>
> >>> Should this be bounded by guest CPUID.80000022H:EBX[NumCorePmc]
> >>> (unless host-initiated)?
> >>
> >> Indeed, how about:
> >>
> >> default:
> >> if (msr > MSR_F15H_PERF_CTR5 &&
> >> msr < MSR_F15H_PERF_CTL0 + 2 * pmu->nr_arch_gp_counters)
> >> return pmu->version > 1;
> >>
> >> and for host-initiated:
> >>
> >> #define MSR_F15H_PERF_MSR_MAX \
> >> (MSR_F15H_PERF_CTR0 + 2 * (KVM_AMD_PMC_MAX_GENERIC - 1))
> >
> > I think there may be an off-by-one error here.
>
> If KVM_AMD_PMC_MAX_GENERIC is 6:
>
> #define MSR_F15H_PERF_CTL 0xc0010200
> #define MSR_F15H_PERF_CTL5 (MSR_F15H_PERF_CTL + 10)
>
> #define MSR_F15H_PERF_CTR 0xc0010201
> #define MSR_F15H_PERF_CTR0 MSR_F15H_PERF_CTR
> #define MSR_F15H_PERF_CTR5 (MSR_F15H_PERF_CTR + 10)
>
> >
> >>
> >> kvm_{set|get}_msr_common()
> >> case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_MSR_MAX:
>
> the original code is "case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:",
>
> in that case, MSR_F15H_PERF_MSR_MAX make sense, right ?
Right. I was misreading the definition.
Powered by blists - more mailing lists