Message-ID: <20241210004946.3718496-10-binbin.wu@linux.intel.com>
Date: Tue, 10 Dec 2024 08:49:35 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: pbonzini@...hat.com,
seanjc@...gle.com,
kvm@...r.kernel.org
Cc: rick.p.edgecombe@...el.com,
kai.huang@...el.com,
adrian.hunter@...el.com,
reinette.chatre@...el.com,
xiaoyao.li@...el.com,
tony.lindgren@...ux.intel.com,
isaku.yamahata@...el.com,
yan.y.zhao@...el.com,
chao.gao@...el.com,
linux-kernel@...r.kernel.org,
binbin.wu@...ux.intel.com
Subject: [PATCH 09/18] KVM: TDX: Enable guest access to LMCE related MSRs
From: Isaku Yamahata <isaku.yamahata@...el.com>

Allow a TDX guest to configure LMCE (Local Machine Check Event) by
handling the MSRs IA32_FEAT_CTL and IA32_MCG_EXT_CTL.

MCE and MCA are advertised via CPUID per the TDX module spec. The guest
kernel can read IA32_FEAT_CTL to check whether the platform has opted in
to LMCE. If it has, the guest kernel can access IA32_MCG_EXT_CTL to
enable/disable LMCE.

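As an illustration only (not part of this patch), the guest-side flow
could look roughly like the sketch below. The helper name
tdx_guest_enable_lmce() is hypothetical; the MSR/bit macros and the
rdmsrl()/wrmsrl() accessors are the standard kernel ones:

	/* Hedged sketch of the guest-side flow; the helper name is made up. */
	static void tdx_guest_enable_lmce(void)
	{
		u64 feat_ctl, mcg_ext_ctl;

		rdmsrl(MSR_IA32_FEAT_CTL, feat_ctl);
		if (!(feat_ctl & FEAT_CTL_LMCE_ENABLED))
			return;		/* platform did not opt in to LMCE */

		rdmsrl(MSR_IA32_MCG_EXT_CTL, mcg_ext_ctl);
		wrmsrl(MSR_IA32_MCG_EXT_CTL, mcg_ext_ctl | MCG_EXT_CTL_LMCE_EN);
	}
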
Handle the MSRs IA32_FEAT_CTL and IA32_MCG_EXT_CTL for TDX guests to
avoid failures when a guest accesses them via TDG.VP.VMCALL<MSR> on #VE;
e.g., a Linux guest will treat such a failure as a #GP(0).

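For context, a simplified and purely illustrative sketch of the
guest-side path (not the actual guest code): the #VE handler forwards
the RDMSR to the host via TDG.VP.VMCALL<MSR> and reports failure so that
a #GP(0) is raised at the faulting instruction. The tdvmcall_rdmsr()
wrapper below is hypothetical:

	/* Illustrative guest #VE RDMSR path; tdvmcall_rdmsr() is made up. */
	static bool ve_handle_rdmsr(struct pt_regs *regs)
	{
		u64 val;

		if (tdvmcall_rdmsr(regs->cx, &val))
			return false;	/* caller raises #GP(0) in the guest */

		regs->ax = lower_32_bits(val);
		regs->dx = upper_32_bits(val);
		return true;
	}
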
A userspace VMM may not opt in to LMCE by default; e.g., QEMU disables
it by default, and "lmce=on" must be added to the "-cpu" option on the
QEMU command line to opt in.

Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
[binbin: rework changelog]
Signed-off-by: Binbin Wu <binbin.wu@...ux.intel.com>
---
TDX "the rest" breakout:
- Renamed from "KVM: TDX: Handle MSR IA32_FEAT_CTL MSR and IA32_MCG_EXT_CTL"
to "KVM: TDX: Enable guest access to LMCE related MSRs".
- Update changelog.
- Check that reserved bits are not set when setting MSR_IA32_MCG_EXT_CTL.
---
 arch/x86/kvm/vmx/tdx.c | 46 +++++++++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index d5343e2ba509..b5aae9d784f7 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2036,6 +2036,7 @@ bool tdx_has_emulated_msr(u32 index)
 	case MSR_MISC_FEATURES_ENABLES:
 	case MSR_IA32_APICBASE:
 	case MSR_EFER:
+	case MSR_IA32_FEAT_CTL:
 	case MSR_IA32_MCG_CAP:
 	case MSR_IA32_MCG_STATUS:
 	case MSR_IA32_MCG_CTL:
@@ -2068,26 +2069,53 @@ bool tdx_has_emulated_msr(u32 index)
 
 static bool tdx_is_read_only_msr(u32 index)
 {
-	return index == MSR_IA32_APICBASE || index == MSR_EFER;
+	return index == MSR_IA32_APICBASE || index == MSR_EFER ||
+	       index == MSR_IA32_FEAT_CTL;
 }
 
 int tdx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 {
-	if (!tdx_has_emulated_msr(msr->index))
-		return 1;
+	switch (msr->index) {
+	case MSR_IA32_FEAT_CTL:
+		/*
+		 * MCE and MCA are advertised via cpuid. Guest kernel could
+		 * check if LMCE is enabled or not.
+		 */
+		msr->data = FEAT_CTL_LOCKED;
+		if (vcpu->arch.mcg_cap & MCG_LMCE_P)
+			msr->data |= FEAT_CTL_LMCE_ENABLED;
+		return 0;
+	case MSR_IA32_MCG_EXT_CTL:
+		if (!msr->host_initiated && !(vcpu->arch.mcg_cap & MCG_LMCE_P))
+			return 1;
+		msr->data = vcpu->arch.mcg_ext_ctl;
+		return 0;
+	default:
+		if (!tdx_has_emulated_msr(msr->index))
+			return 1;
 
-	return kvm_get_msr_common(vcpu, msr);
+		return kvm_get_msr_common(vcpu, msr);
+	}
 }
 
 int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 {
-	if (tdx_is_read_only_msr(msr->index))
-		return 1;
+	switch (msr->index) {
+	case MSR_IA32_MCG_EXT_CTL:
+		if ((!msr->host_initiated && !(vcpu->arch.mcg_cap & MCG_LMCE_P)) ||
+		    (msr->data & ~MCG_EXT_CTL_LMCE_EN))
+			return 1;
+		vcpu->arch.mcg_ext_ctl = msr->data;
+		return 0;
+	default:
+		if (tdx_is_read_only_msr(msr->index))
+			return 1;
 
-	if (!tdx_has_emulated_msr(msr->index))
-		return 1;
+		if (!tdx_has_emulated_msr(msr->index))
+			return 1;
 
-	return kvm_set_msr_common(vcpu, msr);
+		return kvm_set_msr_common(vcpu, msr);
+	}
 }
 
 static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
--
2.46.0