Message-ID: <CALMp9eTOV3Twep9gL-9S+Pe_k-=v17CcJTLb5=+7_pjvWf9RfQ@mail.gmail.com>
Date: Wed, 2 Jan 2019 15:40:58 -0800
From: Jim Mattson <jmattson@...gle.com>
To: Wei Wang <wei.w.wang@...el.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Andi Kleen <ak@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Kan Liang <kan.liang@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
like.xu@...el.com, Jann Horn <jannh@...gle.com>,
arei.gonglei@...wei.com
Subject: Re: [PATCH v4 05/10] KVM/x86: expose MSR_IA32_PERF_CAPABILITIES to
the guest
On Wed, Dec 26, 2018 at 2:01 AM Wei Wang <wei.w.wang@...el.com> wrote:
>
> Bits [5:0] of MSR_IA32_PERF_CAPABILITIES indicate the format of the
> addresses stored in the LBR stack. Expose those bits to the guest
> when the guest LBR feature is enabled.
>
> Signed-off-by: Wei Wang <wei.w.wang@...el.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Andi Kleen <ak@...ux.intel.com>
> ---
> arch/x86/include/asm/perf_event.h | 2 ++
> arch/x86/kvm/cpuid.c | 2 +-
> arch/x86/kvm/vmx.c | 9 +++++++++
> 3 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 2f82795..eee09b7 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -87,6 +87,8 @@
> #define ARCH_PERFMON_BRANCH_MISSES_RETIRED 6
> #define ARCH_PERFMON_EVENTS_COUNT 7
>
> +#define X86_PERF_CAP_MASK_LBR_FMT 0x3f
> +
> /*
> * Intel "Architectural Performance Monitoring" CPUID
> * detection/enumeration details:
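
As an aside for readers of the thread: the new mask covers bits [5:0] of
IA32_PERF_CAPABILITIES, i.e. the LBR format field. A purely illustrative
host-side decode (not part of the patch) would look something like:

	u64 caps;

	/* Keep only the LBR format field, bits [5:0]. */
	rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
	pr_info("LBR format: %llu\n", caps & X86_PERF_CAP_MASK_LBR_FMT);
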
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 7bcfa61..3b8a57b 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -365,7 +365,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
> F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
> 0 /* DS-CPL, VMX, SMX, EST */ |
> 0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> - F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> + F(FMA) | F(CX16) | 0 /* xTPR Update*/ | F(PDCM) |
> F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
> F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
> 0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
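
Setting F(PDCM) here is what lets the guest discover the MSR at all: a
well-behaved guest checks CPUID.01H:ECX[15] (PDCM) before touching
IA32_PERF_CAPABILITIES. Roughly, guest-side and illustrative only:

	unsigned int eax, ebx, ecx, edx;
	u64 caps = 0;

	cpuid(1, &eax, &ebx, &ecx, &edx);
	if (ecx & (1 << 15))	/* PDCM */
		rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
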
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 8d5d984..ee02967 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4161,6 +4161,13 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> return 1;
> msr_info->data = vcpu->arch.ia32_xss;
> break;
> + case MSR_IA32_PERF_CAPABILITIES:
> + if (!boot_cpu_has(X86_FEATURE_PDCM))
> + return 1;
> + msr_info->data = native_read_msr(MSR_IA32_PERF_CAPABILITIES);
Since this isn't guarded by vcpu->kvm->arch.lbr_in_guest, it breaks
backwards compatibility, doesn't it?
> + if (vcpu->kvm->arch.lbr_in_guest)
> + msr_info->data &= X86_PERF_CAP_MASK_LBR_FMT;
> + break;
> case MSR_TSC_AUX:
> if (!msr_info->host_initiated &&
> !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
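
To make the backwards-compatibility concern above concrete, one option is
to refuse the read entirely unless the VM has opted in. A sketch only,
reusing the patch's names; whether to key this on lbr_in_guest or on the
guest's CPUID is of course the author's call:

	case MSR_IA32_PERF_CAPABILITIES:
		/* Sketch: only VMs with the guest LBR feature see the MSR. */
		if (!boot_cpu_has(X86_FEATURE_PDCM) ||
		    !vcpu->kvm->arch.lbr_in_guest)
			return 1;
		msr_info->data = native_read_msr(MSR_IA32_PERF_CAPABILITIES) &
				 X86_PERF_CAP_MASK_LBR_FMT;
		break;
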
> @@ -4343,6 +4350,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> else
> clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
> break;
> + case MSR_IA32_PERF_CAPABILITIES:
> + return 1; /* RO MSR */
> case MSR_TSC_AUX:
> if (!msr_info->host_initiated &&
> !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> --
> 2.7.4
>