Message-ID: <heahqrdiujkusb42hir3qbejwnc6svspt3owwtat345myquny4@5ebkzc6mt2y3>
Date: Wed, 5 Nov 2025 19:29:41 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Jim Mattson <jmattson@...gle.com>,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/11] KVM: nSVM: Add missing consistency check for
event_inj
On Wed, Nov 05, 2025 at 10:48:28AM -0800, Sean Christopherson wrote:
> On Tue, Nov 04, 2025, Yosry Ahmed wrote:
> > According to the APM Volume #2, 15.20 (24593—Rev. 3.42—March 2024):
> >
> > VMRUN exits with VMEXIT_INVALID error code if either:
> > • Reserved values of TYPE have been specified, or
> > • TYPE = 3 (exception) has been specified with a vector that does not
> > correspond to an exception (this includes vector 2, which is an NMI,
> > not an exception).
> >
> > Add the missing consistency checks to KVM. For the second point, fail
> > VMRUN with VMEXIT_INVALID if the vector is not one of the vectors the
> > APM defines for exceptions. Reserved vectors are also considered
> > invalid, which matches the hardware behavior.
>
> Ugh. Strictly speaking, that means KVM needs to match the capabilities of the
> virtual CPU. E.g. if the virtual CPU predates SEV-ES, then #VC should be reserved
> from the guest's perspective.
>
> > Vector 9 (Coprocessor Segment Overrun) is considered invalid because it is
> > reserved on modern CPUs, and no CPU is known to both support SVM and
> > produce this exception.
> >
> > Signed-off-by: Yosry Ahmed <yosry.ahmed@...ux.dev>
> > ---
> > arch/x86/include/asm/svm.h | 5 +++++
> > arch/x86/kvm/svm/nested.c | 33 +++++++++++++++++++++++++++++++++
> > 2 files changed, 38 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> > index e69b6d0dedcf0..3a9441a8954f3 100644
> > --- a/arch/x86/include/asm/svm.h
> > +++ b/arch/x86/include/asm/svm.h
> > @@ -633,6 +633,11 @@ static inline void __unused_size_checks(void)
> > #define SVM_EVTINJ_VALID (1 << 31)
> > #define SVM_EVTINJ_VALID_ERR (1 << 11)
> >
> > +/* Only valid exceptions (and not NMIs) are allowed for SVM_EVTINJ_TYPE_EXEPT */
> > +#define SVM_EVNTINJ_INVALID_EXEPTS (BIT_ULL(NMI_VECTOR) | BIT_ULL(9) | \
> > +					BIT_ULL(15) | BIT_ULL(20) | \
> > +					GENMASK_ULL(27, 22) | BIT_ULL(31))
>
> As above, hardcoding this won't work. E.g. if a VM is migrated from a CPU where
> vector X is reserved to a CPU where vector X is valid, then the VM will observe
> a change in behavior.
>
> Even if we're ok being overly permissive today (e.g. by taking an erratum), this
> will create problems in the future when one of the reserved vectors is defined,
> at which point we'll end up changing guest-visible behavior (and will have to
> take another erratum, or maybe define the erratum to be that KVM straight up
> doesn't enforce this correctly?)
>
> And if we do throw in the towel and don't try to enforce this, we'll still want
> a safeguard against this becoming stale, e.g. when KVM adds support for new
> feature XYZ that comes with a new vector.
>
> Off the cuff, the best idea I have is to define the positive set of vectors
> somewhere common with a static assert, and then invert that. E.g. maybe something
> shared with kvm_trace_sym_exc()?
Do you mean defining the positive set of vectors dynamically based on the
vCPU caps, i.e. a helper returning a dynamic bitmask instead of
SVM_EVNTINJ_INVALID_EXEPTS?
If we reuse that for kvm_trace_sym_exc() it will need more work, but I
don't see why kvm_trace_sym_exc() needs a dynamic list.
So my best guess is that I didn't really understand your suggestion :)