Message-ID: <pw3at4bjmtxa2xdwheqn5a33ahpudqhl2urnbrsuqn7yaa5l5m@5yddcnjwevnd>
Date: Sat, 8 Nov 2025 02:29:28 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Jim Mattson <jmattson@...gle.com>,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/11] KVM: nSVM: Add missing consistency check for
event_inj
On Wed, Nov 05, 2025 at 05:17:38PM -0800, Sean Christopherson wrote:
> On Wed, Nov 05, 2025, Yosry Ahmed wrote:
> > On Wed, Nov 05, 2025 at 10:48:28AM -0800, Sean Christopherson wrote:
> > > On Tue, Nov 04, 2025, Yosry Ahmed wrote:
> > > > According to the APM Volume #2, 15.20 (24593—Rev. 3.42—March 2024):
> > > >
> > > > VMRUN exits with VMEXIT_INVALID error code if either:
> > > > • Reserved values of TYPE have been specified, or
> > > > • TYPE = 3 (exception) has been specified with a vector that does not
> > > > correspond to an exception (this includes vector 2, which is an NMI,
> > > > not an exception).
> > > >
> > > > Add the missing consistency checks to KVM. For the second point, inject
> > > > VMEXIT_INVALID if the vector is anything but the vectors defined by the
> > > > APM for exceptions. Reserved vectors are also considered invalid, which
> > > > matches the HW behavior.
> > >
> > > Ugh. Strictly speaking, that means KVM needs to match the capabilities of the
> > > virtual CPU. E.g. if the virtual CPU predates SEV-ES, then #VC should be reserved
> > > from the guest's perspective.
> > >
> > > > Vector 9 (i.e. #CSO) is considered invalid because it is reserved on modern
> > > > CPUs, and according to LLMs no CPUs exist supporting SVM and producing #CSOs.
> > > >
> > > > Signed-off-by: Yosry Ahmed <yosry.ahmed@...ux.dev>
> > > > ---
> > > > arch/x86/include/asm/svm.h | 5 +++++
> > > > arch/x86/kvm/svm/nested.c | 33 +++++++++++++++++++++++++++++++++
> > > > 2 files changed, 38 insertions(+)
> > > >
> > > > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> > > > index e69b6d0dedcf0..3a9441a8954f3 100644
> > > > --- a/arch/x86/include/asm/svm.h
> > > > +++ b/arch/x86/include/asm/svm.h
> > > > @@ -633,6 +633,11 @@ static inline void __unused_size_checks(void)
> > > > #define SVM_EVTINJ_VALID (1 << 31)
> > > > #define SVM_EVTINJ_VALID_ERR (1 << 11)
> > > >
> > > > +/* Only valid exceptions (and not NMIs) are allowed for SVM_EVTINJ_TYPE_EXEPT */
> > > > +#define SVM_EVTINJ_INVALID_EXEPTS	(BIT_ULL(NMI_VECTOR) | BIT_ULL(9) | \
> > > > +					 BIT_ULL(15) | BIT_ULL(20) | \
> > > > +					 GENMASK_ULL(27, 22) | BIT_ULL(31))
> > >
> > > As above, hardcoding this won't work. E.g. if a VM is migrated from a CPU where
> > > vector X is reserved to a CPU where vector X is valid, then the VM will observe
> > > a change in behavior.
> > >
> > > Even if we're ok being overly permissive today (e.g. by taking an erratum), this
> > > will create problems in the future when one of the reserved vectors is defined,
> > > at which point we'll end up changing guest-visible behavior (and will have to
> > > take another erratum, or maybe define the erratum to be that KVM straight up
> > > doesn't enforce this correctly?)
> > >
> > > And if we do throw in the towel and don't try to enforce this, we'll still want
> > > a safeguard against this becoming stale, e.g. when KVM adds support for new
> > > feature XYZ that comes with a new vector.
> > >
> > > Off the cuff, the best idea I have is to define the positive set of vectors
> > > somewhere common with a static assert, and then invert that. E.g. maybe something
> > > shared with kvm_trace_sym_exc()?
> >
> > Do you mean define the positive set of vectors dynamically based on the
> > vCPU caps? Like a helper returning a dynamic bitmask instead of
> > SVM_EVTINJ_INVALID_EXEPTS?
>
> Ya, that would be option #1, though I'm not entirely sure it's a good option.
> The validity of vectors isn't architecturally tied to the existence of any
> particular feature, at least not explicitly. For the "newer" vectors, i.e. the
> ones that we can treat as conditionally valid, it's pretty obvious which features
> they "belong" to, but even then I hesitate to draw connections, e.g. on the off
> chance that some weird hypervisor checks Family/Model/Stepping or something.
>
> > If we'll reuse that for kvm_trace_sym_exc() it will need more work, but
> > I don't see why we need a dynamic list for kvm_trace_sym_exc().
>
> Sorry, this is for option #2. Hardcode the set of vectors that KVM allows (to
> prevent L1 from throwing pure garbage at hardware), but otherwise defer to the
> CPU to enforce the reserved vectors.
>
> Hrm, but option #2 just delays the inevitable, e.g. we'll be stuck once again
> when KVM supports some new vector, in which case we'll have to change guest
> visible behavior _again_, or bite the bullet and do option #1.
>
> So I guess do option #1 straight away and hope nothing breaks? Maybe hardcode
> everything as supported except #CP (SHSTK) and #VC (SEV-ES)?
Ack, will do something like:

#define ALWAYS_VALID_EVTINJ_EXEPTS	(BIT(DE_VECTOR) | BIT(DB_VECTOR) | \
					 BIT(BP_VECTOR) | BIT(OF_VECTOR) | \
					 BIT(BR_VECTOR) | BIT(UD_VECTOR) | \
					 BIT(NM_VECTOR) | BIT(DF_VECTOR) | \
					 BIT(TS_VECTOR) | BIT(NP_VECTOR) | \
					 BIT(SS_VECTOR) | BIT(GP_VECTOR) | \
					 BIT(PF_VECTOR) | BIT(MF_VECTOR) | \
					 BIT(AC_VECTOR) | BIT(MC_VECTOR) | \
					 BIT(XM_VECTOR) | BIT(VE_VECTOR) | \
					 BIT(HV_VECTOR) | BIT(SX_VECTOR))

static bool nested_svm_event_inj_valid_exept(struct kvm_vcpu *vcpu, u8 vector)
{
	return (BIT(vector) & ALWAYS_VALID_EVTINJ_EXEPTS) ||
	       (vector == CP_VECTOR && guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) ||
	       (vector == VC_VECTOR && guest_cpu_cap_has(vcpu, X86_FEATURE_SEV_ES));
}
I will rebase this on top of the LBRV fixes I just sent out, fold in this
change, and send a v2 early next week.
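As for the safeguard against the list going stale, maybe a static assert
next to wherever the shared vector list ends up, along these lines
(EXC_VECTORS is hypothetical, assuming the list backing kvm_trace_sym_exc()
gets hoisted into a shared mask):

	/*
	 * Hypothetical: every vector KVM knows how to name must either be
	 * always-valid for event injection or explicitly handled as
	 * conditionally valid (#CP/#VC above).
	 */
	static_assert(EXC_VECTORS ==
		      (ALWAYS_VALID_EVTINJ_EXEPTS | BIT(CP_VECTOR) | BIT(VC_VECTOR)));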
>
> > So my best guess is that I didn't really understand your suggestion :)