Message-ID: <20260129063653.3553076-3-shivansh.dhiman@amd.com>
Date: Thu, 29 Jan 2026 06:36:48 +0000
From: Shivansh Dhiman <shivansh.dhiman@....com>
To: <seanjc@...gle.com>, <pbonzini@...hat.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>
CC: <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
<dave.hansen@...ux.intel.com>, <x86@...nel.org>, <hpa@...or.com>,
<xin@...or.com>, <nikunj.dadhania@....com>, <santosh.shukla@....com>
Subject: [PATCH 2/7] KVM: SVM: Disable interception of FRED MSRs for FRED-enabled guests
From: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
The FRED (Flexible Return and Event Delivery) feature introduces a new set
of MSRs for managing its state, such as MSR_IA32_FRED_CONFIG and the
various stack pointer MSRs.
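For illustration, the full set of FRED MSRs touched here can be expressed
as a single table-driven loop; this is only a sketch, not the code added
below, and it assumes nothing beyond the svm_set_intercept_for_msr()
helper already used by this patch and the kernel's ARRAY_SIZE() macro:

	static const u32 fred_msrs[] = {
		MSR_IA32_FRED_RSP0, MSR_IA32_FRED_RSP1, MSR_IA32_FRED_RSP2,
		MSR_IA32_FRED_RSP3, MSR_IA32_FRED_STKLVLS, MSR_IA32_FRED_SSP1,
		MSR_IA32_FRED_SSP2, MSR_IA32_FRED_SSP3, MSR_IA32_FRED_CONFIG,
	};
	int i;

	/* Toggle read/write interception for the whole FRED MSR set. */
	for (i = 0; i < ARRAY_SIZE(fred_msrs); i++)
		svm_set_intercept_for_msr(vcpu, fred_msrs[i], MSR_TYPE_RW,
					  !fred_enabled);

The patch instead open-codes the nine calls, presumably to match the style
of the surrounding intercept updates.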
For a guest that has FRED enabled via its CPUID feature bits, the guest
OS expects to read and write these MSRs directly; intercepting those
accesses would only add unnecessary VM-Exits and performance overhead.

In addition, the contents of these MSRs must always match the context
(host or guest) that is currently running. Otherwise, event delivery
could operate on stale or incorrect MSR values.
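Note that the intercept update below keys off the new
FRED_VIRT_ENABLE_MASK bit in vmcb->control.virt_ext. A minimal sketch of
how that bit would be tied to the guest's FRED capability (hypothetical
here; the actual wiring is expected elsewhere in this series, and the
sketch assumes KVM's guest_cpu_cap_has() helper and the X86_FEATURE_FRED
feature bit):

	/*
	 * Illustrative only: set the FRED virtualization enable bit when
	 * the guest's CPUID advertises FRED so the FRED MSRs are passed
	 * through, and clear it otherwise so they remain intercepted.
	 */
	if (guest_cpu_cap_has(vcpu, X86_FEATURE_FRED))
		svm->vmcb->control.virt_ext |= FRED_VIRT_ENABLE_MASK;
	else
		svm->vmcb->control.virt_ext &= ~FRED_VIRT_ENABLE_MASK;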
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Co-developed-by: Shivansh Dhiman <shivansh.dhiman@....com>
Signed-off-by: Shivansh Dhiman <shivansh.dhiman@....com>
---
arch/x86/include/asm/svm.h | 1 +
arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
2 files changed, 19 insertions(+)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index a42ed39aa8fb..c2f3e03e1f4b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -224,6 +224,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define LBR_CTL_ENABLE_MASK BIT_ULL(0)
#define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK BIT_ULL(1)
+#define FRED_VIRT_ENABLE_MASK BIT_ULL(4)
#define SVM_INTERRUPT_SHADOW_MASK BIT_ULL(0)
#define SVM_GUEST_INTERRUPT_MASK BIT_ULL(1)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5cec971a1f5a..05e44e804aba 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -727,6 +727,22 @@ void svm_vcpu_free_msrpm(void *msrpm)
__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
}
+static void svm_recalc_fred_msr_intercepts(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ bool fred_enabled = svm->vmcb->control.virt_ext & FRED_VIRT_ENABLE_MASK;
+
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP0, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP1, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP2, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_RSP3, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_STKLVLS, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP1, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP2, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_SSP3, MSR_TYPE_RW, !fred_enabled);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_FRED_CONFIG, MSR_TYPE_RW, !fred_enabled);
+}
+
static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -795,6 +811,8 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
if (sev_es_guest(vcpu->kvm))
sev_es_recalc_msr_intercepts(vcpu);
+ svm_recalc_fred_msr_intercepts(vcpu);
+
/*
* x2APIC intercepts are modified on-demand and cannot be filtered by
* userspace.
--
2.43.0