Message-ID: <ZjLLNyvbpfemyN5g@google.com>
Date: Wed, 1 May 2024 16:07:35 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yang Weijiang <weijiang.yang@...el.com>
Cc: pbonzini@...hat.com, dave.hansen@...el.com, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org, peterz@...radead.org,
chao.gao@...el.com, rick.p.edgecombe@...el.com, mlevitsk@...hat.com,
john.allen@....com
Subject: Re: [PATCH v10 22/27] KVM: VMX: Set up interception for CET MSRs
On Sun, Feb 18, 2024, Yang Weijiang wrote:
> @@ -7767,6 +7771,41 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
>  		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
>  }
>  
> +static void vmx_update_intercept_for_cet_msr(struct kvm_vcpu *vcpu)
> +{
> +	bool incpt;
> +
> +	if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
> +		incpt = !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
> +
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_U_CET,
> +					  MSR_TYPE_RW, incpt);
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_S_CET,
> +					  MSR_TYPE_RW, incpt);
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL0_SSP,
> +					  MSR_TYPE_RW, incpt);
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL1_SSP,
> +					  MSR_TYPE_RW, incpt);
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL2_SSP,
> +					  MSR_TYPE_RW, incpt);
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP,
> +					  MSR_TYPE_RW, incpt);
> +		vmx_set_intercept_for_msr(vcpu, MSR_IA32_INT_SSP_TAB,
> +					  MSR_TYPE_RW, incpt);
> +		if (!incpt)
> +			return;
Hmm, I find this unnecessarily confusing and brittle.  E.g. in the unlikely
event more CET stuff comes along, this lurking return could cause problems.

Why not handle S_CET and U_CET in a single common path?  IMO, that is less error
prone, and more clearly captures the relationship between S/U_CET, SHSTK, and IBT.

Updating MSR intercepts is not a hot path, so the overhead of checking guest CPUID
multiple times should be a non-issue.  And eventually KVM should effectively cache
all of those lookups, i.e. the cost will be negligible.
	bool incpt;

	if (kvm_cpu_cap_has(X86_FEATURE_SHSTK)) {
		incpt = !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);

		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL0_SSP,
					  MSR_TYPE_RW, incpt);
		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL1_SSP,
					  MSR_TYPE_RW, incpt);
		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL2_SSP,
					  MSR_TYPE_RW, incpt);
		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP,
					  MSR_TYPE_RW, incpt);
		vmx_set_intercept_for_msr(vcpu, MSR_IA32_INT_SSP_TAB,
					  MSR_TYPE_RW, incpt);
	}

	if (kvm_cpu_cap_has(X86_FEATURE_SHSTK) ||
	    kvm_cpu_cap_has(X86_FEATURE_IBT)) {
		incpt = !guest_cpuid_has(vcpu, X86_FEATURE_IBT) &&
			!guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);

		vmx_set_intercept_for_msr(vcpu, MSR_IA32_U_CET,
					  MSR_TYPE_RW, incpt);
		vmx_set_intercept_for_msr(vcpu, MSR_IA32_S_CET,
					  MSR_TYPE_RW, incpt);
	}