Message-ID: <13e34c1654b47ba5ee5e1de8fa09aabd45dbe159.camel@redhat.com>
Date: Wed, 24 Jul 2024 14:02:34 -0400
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Vitaly Kuznetsov
<vkuznets@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Hou Wenlong <houwenlong.hwl@...group.com>, Kechen Lu <kechenl@...dia.com>,
Oliver Upton <oliver.upton@...ux.dev>, Binbin Wu
<binbin.wu@...ux.intel.com>, Yang Weijiang <weijiang.yang@...el.com>,
Robert Hoo <robert.hoo.linux@...il.com>
Subject: Re: [PATCH v2 47/49] KVM: x86: Drop superfluous host XSAVE check
when adjusting guest XSAVES caps
On Tue, 2024-07-09 at 12:15 -0700, Sean Christopherson wrote:
> On Thu, Jul 04, 2024, Maxim Levitsky wrote:
> > On Fri, 2024-05-17 at 10:39 -0700, Sean Christopherson wrote:
> > > Drop the manual boot_cpu_has() checks on XSAVE when adjusting the guest's
> > > XSAVES capabilities now that guest cpu_caps incorporates KVM's support.
> > > The guest's cpu_caps are initialized from kvm_cpu_caps, which are in turn
> > > initialized from boot_cpu_data, i.e. checking guest_cpu_cap_has() also
> > > checks host/KVM capabilities (which is the entire point of cpu_caps).
> > >
> > > Cc: Maxim Levitsky <mlevitsk@...hat.com>
> > > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> > > ---
> > > arch/x86/kvm/svm/svm.c | 1 -
> > > arch/x86/kvm/vmx/vmx.c | 3 +--
> > > 2 files changed, 1 insertion(+), 3 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > > index 06770b60c0ba..4aaffbf22531 100644
> > > --- a/arch/x86/kvm/svm/svm.c
> > > +++ b/arch/x86/kvm/svm/svm.c
> > > @@ -4340,7 +4340,6 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> > > * the guest read/write access to the host's XSS.
> > > */
> > > guest_cpu_cap_change(vcpu, X86_FEATURE_XSAVES,
> > > - boot_cpu_has(X86_FEATURE_XSAVE) &&
> > > boot_cpu_has(X86_FEATURE_XSAVES) &&
> > > guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE));
> > >
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 741961a1edcc..6fbdf520c58b 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -7833,8 +7833,7 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> > > * to the guest. XSAVES depends on CR4.OSXSAVE, and CR4.OSXSAVE can be
> > > * set if and only if XSAVE is supported.
> > > */
> > > - if (!boot_cpu_has(X86_FEATURE_XSAVE) ||
> > > - !guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE))
> > > + if (!guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE))
> > > guest_cpu_cap_clear(vcpu, X86_FEATURE_XSAVES);
> >
> > Hi,
> >
> > I have a question about this code, even before the patch was applied:
> >
> > While it is obviously correct to disable XSAVES when XSAVE is not supported,
> > I wonder: there are a lot more dependencies like that, and KVM explicitly
> > doesn't bother checking them, e.g. all of the AVX family also depends on
> > XSAVE due to XCR0.
> >
> > What makes the XSAVES/XSAVE dependency special here? Maybe we can remove
> > this code to be consistent?
>
> Because that would result in VMX and SVM behavior diverging with respect to
> whether guest_cpu_cap_has(X86_FEATURE_XSAVES). E.g. for AMD it would be 100%
> accurate, but for Intel it would be accurate if and only if XSAVE is supported.
This is a good justification, and IMHO it is worth a comment in the VMX code,
so that this question won't be raised again.
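Something like this, purely as a sketch of the wording against the quoted hunk
(the comment text is my suggestion, not existing code):

	/*
	 * Clear XSAVES if the guest lacks XSAVE so that
	 * guest_cpu_cap_has(X86_FEATURE_XSAVES) gives the same answer on VMX
	 * as on SVM, where the cap is computed from host XSAVES + guest
	 * XSAVE.  Other XSAVE-dependent features (e.g. AVX) are deliberately
	 * not filtered this way.
	 */
	if (!guest_cpu_cap_has(vcpu, X86_FEATURE_XSAVE))
		guest_cpu_cap_clear(vcpu, X86_FEATURE_XSAVES);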
> In practice that isn't truly problematic, because checks on XSAVES from common
> code are gated on guest CR4.OSXSAVE=1, i.e. implicitly check XSAVE support. But
> the potential danger of subtly divergent behavior between VMX and SVM isn't worth
> making AVX vs. XSAVES consistent within VMX, especially since VMX vs. SVM would
> still be inconsistent.
>
> > The AMD portion of this patch, on the other hand, does make sense, due to
> > the lack of a separate XSAVES intercept.
>
> FWIW, AMD also needs precise tracking in order to passthrough XSS for SEV-ES.
Makes sense too.
>
Best regards,
Maxim Levitsky