Message-ID: <8cc2bb9a-167e-598c-6a9e-c23e943b1248@redhat.com>
Date: Fri, 23 Apr 2021 08:13:11 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Reiji Watanabe <reijiw@...gle.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v2] KVM: SVM: Delay restoration of host MSR_TSC_AUX until
return to userspace
On 22/04/21 22:12, Sean Christopherson wrote:
> case MSR_TSC_AUX:
> if (!boot_cpu_has(X86_FEATURE_RDTSCP))
> return 1;
>
> if (!msr_info->host_initiated &&
> !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> return 1;
>
> /*
> * TSC_AUX is usually changed only during boot and never read
> * directly. Intercept TSC_AUX instead of exposing it to the
> * guest via direct_access_msrs, and switch it via user return.
> */
> preempt_disable();
> r = kvm_set_user_return_msr(TSC_AUX_URET_SLOT, data, -1ull);
> preempt_enable();
> if (r)
> return 1;
>
> /*
> * Bits 63:32 are dropped by AMD CPUs, but are reserved on
> * Intel CPUs. AMD's APM has incomplete and conflicting info
> * on the architectural behavior; emulate current hardware as
> * doing so ensures migrating from AMD to Intel won't explode.
> */
> svm->tsc_aux = (u32)data;
> break;
>
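Just to spell out the mechanism being relied on here, for anyone reading
along: kvm_set_user_return_msr() writes the guest value into the MSR now
and arms a user-return notifier that puts the host value back before the
task returns to userspace, so the host's MSR_TSC_AUX is only restored when
it can actually matter.  Roughly like this (a simplified sketch, not the
real arch/x86/kvm/x86.c code; the struct and helper names below are made
up for illustration):

/* One slot per user-return MSR, tracked per CPU. */
struct uret_msr {
	u64 host;	/* host value, restored before returning to userspace */
	u64 curr;	/* value currently loaded in the MSR */
};

static int set_uret_msr(struct uret_msr *m, u32 msr, u64 value, u64 mask)
{
	int err;

	/* Keep the host's bits outside the mask. */
	value = (value & mask) | (m->host & ~mask);
	if (value == m->curr)
		return 0;

	/* wrmsrl_safe() fails instead of oopsing if the write #GPs. */
	err = wrmsrl_safe(msr, value);
	if (err)
		return err;

	m->curr = value;
	return 0;
}

/* Called from the user-return notifier on the way back to userspace. */
static void restore_uret_msr(struct uret_msr *m, u32 msr)
{
	if (m->curr != m->host) {
		wrmsrl(msr, m->host);
		m->curr = m->host;
	}
}

That is also why the preempt_disable()/preempt_enable() pair is needed: the
slot is per-CPU state and the WRMSR has to hit the CPU the vCPU runs on, so
we must not be migrated in between.  And the "if (r) return 1;" turns a
faulting write into a #GP for the guest instead of silently ignoring it.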
Ok, squashed in the following:
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 14ff7f0963e9..00e9680969a2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2875,16 +2875,28 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
if (!boot_cpu_has(X86_FEATURE_RDTSCP))
return 1;
+ if (!msr->host_initiated &&
+ !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
+ return 1;
+
/*
* TSC_AUX is usually changed only during boot and never read
* directly. Intercept TSC_AUX instead of exposing it to the
* guest via direct_access_msrs, and switch it via user return.
*/
- svm->tsc_aux = data;
-
preempt_disable();
- kvm_set_user_return_msr(TSC_AUX_URET_SLOT, data, -1ull);
+ r = kvm_set_user_return_msr(TSC_AUX_URET_SLOT, data, -1ull);
preempt_enable();
+ if (r)
+ return 1;
+
+ /*
+ * Bits 63:32 are dropped by AMD CPUs, but are reserved on
+ * Intel CPUs. AMD's APM has incomplete and conflicting info
+ * on the architectural behavior; emulate current hardware as
+ * doing so ensures migrating from AMD to Intel won't explode.
+ */
+ svm->tsc_aux = (u32)data;
break;
case MSR_IA32_DEBUGCTLMSR:
if (!boot_cpu_has(X86_FEATURE_LBRV)) {
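With the added !msr->host_initiated check, a guest WRMSR to TSC_AUX #GPs
when RDTSCP is not exposed in CPUID, while userspace can still save and
restore the MSR for migration, because writes through KVM_SET_MSRS take the
host_initiated path.  For reference, that path is just the plain
KVM_SET_MSRS ioctl, something like the following (illustrative userspace
snippet, not part of the patch, error handling trimmed):

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_TSC_AUX 0xc0000103

/* Host-initiated write of TSC_AUX, as a VMM would do when restoring state. */
static int set_tsc_aux(int vcpu_fd, uint64_t val)
{
	struct kvm_msrs *msrs;
	int ret;

	msrs = calloc(1, sizeof(*msrs) + sizeof(struct kvm_msr_entry));
	if (!msrs)
		return -1;

	msrs->nmsrs = 1;
	msrs->entries[0].index = MSR_TSC_AUX;
	msrs->entries[0].data = val;

	/* KVM_SET_MSRS returns the number of MSRs successfully set. */
	ret = ioctl(vcpu_fd, KVM_SET_MSRS, msrs);
	free(msrs);
	return ret == 1 ? 0 : -1;
}

The (u32) truncation keeps the stored value free of bits 63:32, so a value
saved on an AMD host can be restored on an Intel host, where those bits are
reserved; that is the "won't explode" part of the comment.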
Paolo