Message-ID: <ZM1HODB6No0XArEq@google.com>
Date: Fri, 4 Aug 2023 11:45:12 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yang Weijiang <weijiang.yang@...el.com>
Cc: pbonzini@...hat.com, peterz@...radead.org, john.allen@....com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
rick.p.edgecombe@...el.com, chao.gao@...el.com,
binbin.wu@...ux.intel.com
Subject: Re: [PATCH v5 05/19] KVM:x86: Initialize kvm_caps.supported_xss
On Thu, Aug 03, 2023, Yang Weijiang wrote:
> Set kvm_caps.supported_xss to host_xss & KVM_SUPPORTED_XSS.
> host_xss contains the xstate feature bits the host supports for thread
> context switching, and KVM_SUPPORTED_XSS contains all XSS feature bits
> that KVM enables, so the result of the AND represents the feature bits
> supported by both.  Since the result is a subset of host_xss, the
> related XSAVE-managed MSRs are automatically swapped between guest and
> host when a vCPU exits to userspace.
>
> Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 1 -
> arch/x86/kvm/x86.c | 6 +++++-
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 0ecf4be2c6af..c8d9870cfecb 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7849,7 +7849,6 @@ static __init void vmx_set_cpu_caps(void)
> kvm_cpu_cap_set(X86_FEATURE_UMIP);
>
> /* CPUID 0xD.1 */
> - kvm_caps.supported_xss = 0;
Dropping this code in *this* patch is wrong; it belongs in whatever patch(es)
add IBT and SHSTK support to VMX.

And that does matter, because it means this common patch can be carried with
SVM support without breaking VMX.
> if (!cpu_has_vmx_xsaves())
> kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5d6d6fa33e5b..e9f3627d5fdd 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -225,6 +225,8 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
> | XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
> | XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
>
> +#define KVM_SUPPORTED_XSS 0
> +
> u64 __read_mostly host_efer;
> EXPORT_SYMBOL_GPL(host_efer);
>
> @@ -9498,8 +9500,10 @@ static int __kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
>
> rdmsrl_safe(MSR_EFER, &host_efer);
>
> - if (boot_cpu_has(X86_FEATURE_XSAVES))
> + if (boot_cpu_has(X86_FEATURE_XSAVES)) {
> rdmsrl(MSR_IA32_XSS, host_xss);
> + kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
> + }
Can you opportunistically (in this patch) hoist this above EFER so that XCR0 and
XSS are colocated? I.e. end up with this:
	if (boot_cpu_has(X86_FEATURE_XSAVE)) {
		host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
		kvm_caps.supported_xcr0 = host_xcr0 & KVM_SUPPORTED_XCR0;
	}

	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
		rdmsrl(MSR_IA32_XSS, host_xss);
		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
	}

	rdmsrl_safe(MSR_EFER, &host_efer);