Message-ID: <YAdvGkwtJf3CDxo6@google.com>
Date: Tue, 19 Jan 2021 15:45:30 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Babu Moger <babu.moger@....com>
Cc: pbonzini@...hat.com, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, fenghua.yu@...el.com, tony.luck@...el.com,
wanpengli@...cent.com, kvm@...r.kernel.org,
thomas.lendacky@....com, peterz@...radead.org, joro@...tes.org,
x86@...nel.org, kyung.min.park@...el.com,
linux-kernel@...r.kernel.org, krish.sadhukhan@...cle.com,
hpa@...or.com, mgross@...ux.intel.com, vkuznets@...hat.com,
kim.phillips@....com, wei.huang2@....com, jmattson@...gle.com
Subject: Re: [PATCH v3 2/2] KVM: SVM: Add support for Virtual SPEC_CTRL

On Tue, Jan 19, 2021, Babu Moger wrote:
>
> On 1/19/21 12:31 PM, Sean Christopherson wrote:
> > On Fri, Jan 15, 2021, Babu Moger wrote:
> >> @@ -3789,7 +3792,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> >> * is no need to worry about the conditional branch over the wrmsr
> >> * being speculatively taken.
> >> */
> >> - x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
> >> + if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> >> + svm->vmcb->save.spec_ctrl = svm->spec_ctrl;
> >> + else
> >> + x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
> >
> > Can't we avoid functional code in svm_vcpu_run() entirely when V_SPEC_CTRL is
> > supported? Make this code a nop, disable interception from time zero, and
>
> Sean, I thought you mentioned earlier about not changing the interception
> mechanism.

I assume you're referring to this comment?

On Mon, Dec 7, 2020 at 3:13 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Mon, Dec 07, 2020, Babu Moger wrote:
> > When this feature is enabled, the hypervisor no longer has to
> > intercept the usage of the SPEC_CTRL MSR and no longer is required to
> > save and restore the guest SPEC_CTRL setting when switching
> > hypervisor/guest modes.
>
> Well, it's still required if the hypervisor wanted to allow the guest to turn
> off mitigations that are enabled in the host. I'd omit this entirely and focus
> on what hardware does and how Linux/KVM utilize the new feature.

I wasn't suggesting that KVM should intercept SPEC_CTRL; I was pointing out
that there exists a scenario where a hypervisor would need/want to intercept
SPEC_CTRL, and that stating that a hypervisor is/isn't required to do
something isn't helpful in a KVM/Linux changelog because it doesn't describe
the actual change, nor does it help understand _why_ the change is correct.

> Do you think we should disable the interception right away if V_SPEC_CTRL is
> supported?

Yes, unless I'm missing an interaction somewhere, that will simplify the get/set
flows as they won't need to handle the case where the MSR is intercepted when
V_SPEC_CTRL is supported. If the MSR is conditionally passed through, the get
flow would need to check if the MSR is intercepted to determine whether
svm->spec_ctrl or svm->vmcb->save.spec_ctrl holds the guest's value.
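
For illustration, this is roughly the check the get flow would need if the MSR
were only conditionally passed through, reusing the existing
msr_write_intercepted() helper; the diff below avoids it entirely by never
intercepting the MSR on V_SPEC_CTRL hardware:

	/* Guest value is in the VMCB only while the MSR is passed through. */
	if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL) &&
	    !msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
		msr_info->data = svm->vmcb->save.spec_ctrl;
	else
		msr_info->data = svm->spec_ctrl;
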
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cce0143a6f80..40f1bd449cfa 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2678,7 +2678,10 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
!guest_has_spec_ctrl_msr(vcpu))
return 1;
- msr_info->data = svm->spec_ctrl;
+ if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
+ msr_info->data = svm->vmcb->save.spec_ctrl;
+ else
+ msr_info->data = svm->spec_ctrl;
break;
case MSR_AMD64_VIRT_SPEC_CTRL:
if (!msr_info->host_initiated &&
@@ -2779,6 +2782,11 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
if (kvm_spec_ctrl_test_value(data))
return 1;
+ if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL)) {
+ svm->vmcb->save.spec_ctrl = data;
+ break;
+ }
+
svm->spec_ctrl = data;
if (!data)
break;
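
Not shown above: the "disable interception from time zero" half. A minimal
sketch, assuming init_vmcb() is an acceptable home for it and reusing the
set_msr_interception() helper that the lazy passthrough in svm_set_msr()
already uses:

	/* 1, 1 == allow reads and writes, i.e. do not intercept the MSR. */
	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
		set_msr_interception(&svm->vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL,
				     1, 1);

With that, SPEC_CTRL is never intercepted on V_SPEC_CTRL hardware and the
get/set paths above only ever deal with the VMCB field.
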
> > read/write the VMCB field in svm_{get,set}_msr(). I.e. don't touch
> > svm->spec_ctrl if V_SPEC_CTRL is supported.

Potentially harebrained alternative...

From an architectural SVM perspective, what are the rules for VMCB fields that
don't exist (on the current hardware)? E.g. are they reserved MBZ? If not,
does the SVM architecture guarantee that reserved fields will not be modified?
I couldn't (quickly) find anything in the APM that explicitly states what
happens with defined-but-not-existent fields.
Specifically in the context of this change, ignoring nested correctness, what
would happen if KVM used the VMCB field even on CPUs without V_SPEC_CTRL? Would
this explode on VMRUN? Risk silent corruption? Just Work (TM)?
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cce0143a6f80..22a6a7c35d0a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1285,7 +1285,6 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
u32 dummy;
u32 eax = 1;
- svm->spec_ctrl = 0;
svm->virt_spec_ctrl = 0;
if (!init_event) {
@@ -2678,7 +2677,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
!guest_has_spec_ctrl_msr(vcpu))
return 1;
- msr_info->data = svm->spec_ctrl;
+ msr_info->data = svm->vmcb->save.spec_ctrl;
break;
case MSR_AMD64_VIRT_SPEC_CTRL:
if (!msr_info->host_initiated &&
@@ -2779,7 +2778,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
if (kvm_spec_ctrl_test_value(data))
return 1;
- svm->spec_ctrl = data;
+ svm->vmcb->save.spec_ctrl = data;
if (!data)
break;
@@ -3791,7 +3790,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
* is no need to worry about the conditional branch over the wrmsr
* being speculatively taken.
*/
- x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
+ x86_spec_ctrl_set_guest(svm->vmcb->save.spec_ctrl, svm->virt_spec_ctrl);
svm_vcpu_enter_exit(vcpu, svm);
@@ -3811,12 +3810,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
* save it.
*/
if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
- svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+ svm->vmcb->save.spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
if (!sev_es_guest(svm->vcpu.kvm))
reload_tss(vcpu);
- x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
+ x86_spec_ctrl_restore_host(svm->vmcb->save.spec_ctrl, svm->virt_spec_ctrl);
if (!sev_es_guest(svm->vcpu.kvm)) {
vcpu->arch.cr2 = svm->vmcb->save.cr2;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5431e6335e2e..a4f9417e3b7e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -137,7 +137,6 @@ struct vcpu_svm {
u64 gs_base;
} host;
- u64 spec_ctrl;
/*
* Contains guest-controlled bits of VIRT_SPEC_CTRL, which will be
* translated into the appropriate L2_CFG bits on the host to
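
And if the answer to the VMRUN question above turns out to be "explodes" or
"silent corruption", a hypothetical pair of helpers (names purely
illustrative) could still keep the callers unified while retaining the
conditional split from the v3 patch, i.e. svm_{get,set}_msr() and
svm_vcpu_run() would go through these instead of open coding the feature
checks:

	/* Pick where the guest's SPEC_CTRL value lives. */
	static u64 svm_get_guest_spec_ctrl(struct vcpu_svm *svm)
	{
		if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
			return svm->vmcb->save.spec_ctrl;
		return svm->spec_ctrl;
	}

	static void svm_set_guest_spec_ctrl(struct vcpu_svm *svm, u64 data)
	{
		if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
			svm->vmcb->save.spec_ctrl = data;
		else
			svm->spec_ctrl = data;
	}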