Message-ID: <aKShs0btGwLtYlVc@google.com>
Date: Tue, 19 Aug 2025 09:09:23 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Chao Gao <chao.gao@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, mlevitsk@...hat.com,
rick.p.edgecombe@...el.com, weijiang.yang@...el.com, xin@...or.com,
Mathias Krause <minipli@...ecurity.net>, John Allen <john.allen@....com>,
Paolo Bonzini <pbonzini@...hat.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v12 15/24] KVM: VMX: Emulate read and write to CET MSRs
On Mon, Aug 11, 2025, Chao Gao wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b5c4db4b7e04..cc39ace47262 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1885,6 +1885,27 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
> 
>  		data = (u32)data;
>  		break;
> +	case MSR_IA32_U_CET:
> +	case MSR_IA32_S_CET:
> +		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
> +		    !guest_cpu_cap_has(vcpu, X86_FEATURE_IBT))
> +			return KVM_MSR_RET_UNSUPPORTED;
> +		if (!is_cet_msr_valid(vcpu, data))
> +			return 1;
> +		break;
> +	case MSR_KVM_INTERNAL_GUEST_SSP:
> +		if (!host_initiated)
> +			return 1;
> +		fallthrough;
> +	case MSR_IA32_PL0_SSP ... MSR_IA32_INT_SSP_TAB:
> +		if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK))
> +			return KVM_MSR_RET_UNSUPPORTED;
> +		if (is_noncanonical_msr_address(data, vcpu))
This emulation is wrong (in no small part because the architecture sucks). From
the SDM:
  "If the processor does not support Intel 64 architecture, these fields have
   only 32 bits; bits 63:32 of the MSRs are reserved."

  "On processors that support Intel 64 architecture this value cannot represent
   a non-canonical address."

  "In protected mode, only 31:0 are loaded."
That means KVM needs to drop bits 63:32 if the vCPU doesn't have LM or if the vCPU
isn't in 64-bit mode. The last one is especially frustrating, because software
can still get a 64-bit value into the MSRs while running in protected, e.g. by
switching to 64-bit mode, doing WRMSRs, then switching back to 32-bit mode.
But, there's probably no point in actually trying to correctly emulate/virtualize
the Protected Mode behavior, because the MSRs can be written via XRSTOR, and to
close that hole KVM would need to trap-and-emulate XRSTOR. No thanks.
Unless someone has a better idea, I'm inclined to take an erratum for this, i.e.
just sweep it under the rug.
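
If KVM ever did want to emulate it, the truncation would amount to something
like the following standalone sketch (hypothetical helper name, not actual
KVM code):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch, NOT KVM code: per the SDM text quoted above, bits
 * 63:32 of an SSP MSR value only survive when the vCPU both supports Long
 * Mode and is currently executing in 64-bit mode.
 */
static uint64_t ssp_effective_value(uint64_t data, bool guest_has_lm,
				    bool in_64bit_mode)
{
	if (!guest_has_lm || !in_64bit_mode)
		return (uint32_t)data;	/* only bits 31:0 are loaded */

	return data;
}
```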
> +			return 1;
> +		/* All SSP MSRs except MSR_IA32_INT_SSP_TAB must be 4-byte aligned */
> +		if (index != MSR_IA32_INT_SSP_TAB && !IS_ALIGNED(data, 4))
> +			return 1;
> +		break;
>  	}
> 
>  	msr.data = data;
...
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index f8fbd33db067..d5b039addd11 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -733,4 +733,27 @@ static inline void kvm_set_xstate_msr(struct kvm_vcpu *vcpu,
>  	kvm_fpu_put();
>  }
> 
> +#define CET_US_RESERVED_BITS		GENMASK(9, 6)
> +#define CET_US_SHSTK_MASK_BITS		GENMASK(1, 0)
> +#define CET_US_IBT_MASK_BITS		(GENMASK_ULL(5, 2) | GENMASK_ULL(63, 10))
> +#define CET_US_LEGACY_BITMAP_BASE(data)	((data) >> 12)
> +
> +static inline bool is_cet_msr_valid(struct kvm_vcpu *vcpu, u64 data)
This name is misleading, e.g. it reads "is this CET MSR valid", whereas the helper
is checking "is this value for U_CET or S_CET valid". Maybe kvm_is_valid_u_s_cet()?
> +{
> +	if (data & CET_US_RESERVED_BITS)
> +		return false;
> +	if (!guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
> +	    (data & CET_US_SHSTK_MASK_BITS))
> +		return false;
> +	if (!guest_cpu_cap_has(vcpu, X86_FEATURE_IBT) &&
> +	    (data & CET_US_IBT_MASK_BITS))
> +		return false;
> +	if (!IS_ALIGNED(CET_US_LEGACY_BITMAP_BASE(data), 4))
> +		return false;
> +	/* IBT can be suppressed iff the TRACKER isn't WAIT_ENDBR. */
> +	if ((data & CET_SUPPRESS) && (data & CET_WAIT_ENDBR))
> +		return false;
> +
> +	return true;
> +}
>  #endif
> --
> 2.47.1
>
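FWIW, the U_CET/S_CET rules the helper enforces can be mocked up standalone
for experimentation, e.g. (bit positions per the SDM and msr-index.h;
illustration only, not the kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace mock of the U_CET/S_CET value checks, for illustration only.
 * Bit positions follow arch/x86/include/asm/msr-index.h and the SDM. */
#define MOCK_CET_US_RESERVED	(0xfULL << 6)	/* bits 9:6 */
#define MOCK_CET_SUPPRESS	(1ULL << 10)
#define MOCK_CET_WAIT_ENDBR	(1ULL << 11)

static bool u_s_cet_value_ok(uint64_t data, bool has_shstk, bool has_ibt)
{
	uint64_t shstk_bits = 0x3ULL;				 /* bits 1:0 */
	uint64_t ibt_bits = (0xfULL << 2) | ~((1ULL << 10) - 1); /* 5:2, 63:10 */

	if (data & MOCK_CET_US_RESERVED)
		return false;
	if (!has_shstk && (data & shstk_bits))
		return false;
	if (!has_ibt && (data & ibt_bits))
		return false;
	/* The legacy bitmap base (bits 63:12) must itself be 4-byte aligned. */
	if ((data >> 12) & 0x3)
		return false;
	/* IBT can be suppressed iff the TRACKER isn't WAIT_ENDBR. */
	if ((data & MOCK_CET_SUPPRESS) && (data & MOCK_CET_WAIT_ENDBR))
		return false;

	return true;
}
```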