Message-ID: <932e6ab3da191bd342e354ad7e4d05c835f785e9.camel@redhat.com>
Date: Thu, 14 Jan 2021 13:39:11 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Wei Huang <wei.huang2@....com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, vkuznets@...hat.com, joro@...tes.org,
bp@...en8.de, tglx@...utronix.de, mingo@...hat.com, x86@...nel.org,
jmattson@...gle.com, wanpengli@...cent.com, bsd@...hat.com,
dgilbert@...hat.com
Subject: Re: [PATCH 2/2] KVM: SVM: Add support for VMCB address check change
On Tue, 2021-01-12 at 11:18 -0800, Sean Christopherson wrote:
> On Tue, Jan 12, 2021, Wei Huang wrote:
> > New AMD CPUs have a change that checks the VMEXIT intercept on special SVM
> > instructions before checking their EAX against the reserved memory region.
> > This change is indicated by CPUID_0x8000000A_EDX[28]. If it is 1, KVM
> > doesn't need to intercept and emulate #GP faults for such instructions
> > because #GP isn't supposed to be triggered.
> >
> > Co-developed-by: Bandan Das <bsd@...hat.com>
> > Signed-off-by: Bandan Das <bsd@...hat.com>
> > Signed-off-by: Wei Huang <wei.huang2@....com>
> > ---
> > arch/x86/include/asm/cpufeatures.h | 1 +
> > arch/x86/kvm/svm/svm.c | 2 +-
> > 2 files changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> > index 84b887825f12..ea89d6fdd79a 100644
> > --- a/arch/x86/include/asm/cpufeatures.h
> > +++ b/arch/x86/include/asm/cpufeatures.h
> > @@ -337,6 +337,7 @@
> > #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */
> > #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */
> > #define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */
> > +#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */
>
> Heh, KVM should advertise this to userspace by setting the kvm_cpu_cap bit.
> KVM forwards relevant VM-Exits to L1 without checking if rAX points at an
> invalid L1 GPA.
I agree that we should be able to fix/hide the errata from L1, and also
expose this bit to L1 so that it doesn't try to apply the workaround
itself when it runs nested guests of its own.
Note that there is currently a bug in this patch series that prevents
the workaround from working for a guest that itself runs nested guests
(e.g. an L3): when we intercept the #GP while a nested guest is running,
we should do a vmexit to the guest hypervisor with the
SVM_EXIT_VMRUN/VMSAVE/etc exit reason instead of emulating the
instruction ourselves. This can be fixed though; I did it locally and it
works. A (lightly tested) patch for that is attached.
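For reference, the core of the fix looks roughly like this (a sketch of
the approach only, not the attached patch; 'exit_code' here stands for
the exit reason matching whichever instruction was intercepted):

	/*
	 * Sketch: if the #GP was intercepted while a nested guest was
	 * running, reflect a VMRUN/VMLOAD/VMSAVE vmexit to the guest
	 * hypervisor instead of emulating the instruction in L0.
	 */
	if (is_guest_mode(vcpu)) {
		svm->vmcb->control.exit_code = exit_code; /* e.g. SVM_EXIT_VMRUN */
		svm->vmcb->control.exit_info_1 = 0;
		svm->vmcb->control.exit_info_2 = 0;
		return nested_svm_vmexit(svm);
	}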
Best regards,
Maxim Levitsky
>
> > /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
> > #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 74620d32aa82..451b82df2eab 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -311,7 +311,7 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
> > svm->vmcb->save.efer = efer | EFER_SVME;
> > vmcb_mark_dirty(svm->vmcb, VMCB_CR);
> > /* Enable GP interception for SVM instructions if needed */
> > - if (efer & EFER_SVME)
> > + if ((efer & EFER_SVME) && !boot_cpu_has(X86_FEATURE_SVME_ADDR_CHK))
> > set_exception_intercept(svm, GP_VECTOR);
> >
> > return 0;
> > --
> > 2.27.0
> >
[Attachment: "patch.diff" (text/x-patch, 1315 bytes)]