Message-ID: <20260129063653.3553076-6-shivansh.dhiman@amd.com>
Date: Thu, 29 Jan 2026 06:36:51 +0000
From: Shivansh Dhiman <shivansh.dhiman@....com>
To: <seanjc@...gle.com>, <pbonzini@...hat.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>
CC: <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
<dave.hansen@...ux.intel.com>, <x86@...nel.org>, <hpa@...or.com>,
<xin@...or.com>, <nikunj.dadhania@....com>, <santosh.shukla@....com>
Subject: [PATCH 5/7] KVM: SVM: Support FRED nested exception injection
From: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Set the SVM nested-exception bit in EVENT_INJECTION_CTL when
injecting a nested exception using FRED event delivery, to
ensure that:
1) the nested exception is injected at the correct stack level;
2) the nested bit defined in the FRED stack frame is set.
The event stack level used by FRED event delivery depends on whether
the event is a nested exception encountered during delivery of an
earlier event, because a nested exception is "regarded" as occurring
in ring 0. E.g., when #PF is configured to use stack level 1 in the
IA32_FRED_STKLVLS MSR:
- a nested #PF is delivered on the stack pointed to by the FRED_RSP1
MSR, whether encountered in ring 3 or ring 0.
- a normal #PF is delivered on the stack pointed to by the FRED_RSP0
MSR when encountered in ring 3.
The SVM nested-exception support ensures that the correct event stack
level is chosen when a VM entry injects a nested exception.
Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Co-developed-by: Shivansh Dhiman <shivansh.dhiman@....com>
Signed-off-by: Shivansh Dhiman <shivansh.dhiman@....com>
Reviewed-by: Nikunj A Dadhania <nikunj@....com>
---
arch/x86/include/asm/svm.h | 1 +
arch/x86/kvm/svm/svm.c | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index c2f3e03e1f4b..f4a9781c1d6c 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -657,6 +657,7 @@ static inline void __unused_size_checks(void)
#define SVM_EVTINJ_VALID (1 << 31)
#define SVM_EVTINJ_VALID_ERR (1 << 11)
+#define SVM_EVTINJ_NESTED_EXCEPTION (1 << 13)
#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 693b46d715b4..374589784206 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -363,6 +363,7 @@ static void svm_inject_exception(struct kvm_vcpu *vcpu)
{
struct kvm_queued_exception *ex = &vcpu->arch.exception;
struct vcpu_svm *svm = to_svm(vcpu);
+ bool nested = is_fred_enabled(vcpu) && ex->nested;
kvm_deliver_exception_payload(vcpu, ex);
@@ -373,6 +374,7 @@ static void svm_inject_exception(struct kvm_vcpu *vcpu)
svm->vmcb->control.event_inj = ex->vector
| SVM_EVTINJ_VALID
| (ex->has_error_code ? SVM_EVTINJ_VALID_ERR : 0)
+ | (nested ? SVM_EVTINJ_NESTED_EXCEPTION : 0)
| SVM_EVTINJ_TYPE_EXEPT;
if (is_fred_enabled(vcpu))
@@ -4137,7 +4139,8 @@ static void svm_complete_interrupts(struct kvm_vcpu *vcpu, bool reinject_on_vmex
kvm_requeue_exception(vcpu, vector,
exitintinfo & SVM_EXITINTINFO_VALID_ERR,
- error_code, false, event_data);
+ error_code, exitintinfo & SVM_EVTINJ_NESTED_EXCEPTION,
+ event_data);
break;
}
case SVM_EXITINTINFO_TYPE_INTR:
--
2.43.0