Message-ID: <c5acc3ac2aec4b98f9211ca3f4100c358bf2f460.camel@redhat.com>
Date: Thu, 21 Jul 2022 15:05:49 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Santosh Shukla <santosh.shukla@....com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Tom Lendacky <thomas.lendacky@....com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCHv2 4/7] KVM: SVM: Report NMI not allowed when Guest busy
handling VNMI
On Wed, 2022-07-20 at 21:54 +0000, Sean Christopherson wrote:
> On Sat, Jul 09, 2022, Santosh Shukla wrote:
> > In the VNMI case, Report NMI is not allowed when the processor set the
> > V_NMI_MASK to 1 which means the Guest is busy handling VNMI.
> >
> > Signed-off-by: Santosh Shukla <santosh.shukla@....com>
> > ---
> > v2:
> > - Moved vnmi check after is_guest_mode() in func _nmi_blocked().
> > - Removed is_vnmi_mask_set check from _enable_nmi_window().
> > as it was a redundant check.
> >
> > arch/x86/kvm/svm/svm.c | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 3574e804d757..44c1f2317b45 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -3480,6 +3480,9 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
> > if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
> > return false;
> >
> > + if (is_vnmi_enabled(svm) && is_vnmi_mask_set(svm))
> > + return true;
> > +
> > ret = (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
> > (vcpu->arch.hflags & HF_NMI_MASK);
> >
> > @@ -3609,6 +3612,9 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
> > {
> > struct vcpu_svm *svm = to_svm(vcpu);
> >
> > + if (is_vnmi_enabled(svm))
> > + return;
>
> Ugh, is there really no way to trigger an exit when NMIs become unmasked? Because
> if there isn't, this is broken for KVM.
>
> On bare metal, if two NMIs arrive "simultaneously", so long as NMIs aren't blocked,
> the first NMI will be delivered and the second will be pended, i.e. software will
> see both NMIs. And if that doesn't hold true, the window for a true collision is
> really, really tiny.
>
> But in KVM, because a vCPU may not be run a long duration, that window becomes
> very large. To not drop NMIs and more faithfully emulate hardware, KVM allows two
> NMIs to be _pending_. And when that happens, KVM needs to trigger an exit when
> NMIs become unmasked _after_ the first NMI is injected.
This is how I see it:
- When an NMI arrives and no NMI is either injected (V_NMI_PENDING) or in service (V_NMI_MASK),
all that is needed to inject the NMI is to set the V_NMI_PENDING bit and do a VM entry.
- If V_NMI_PENDING is set but V_NMI_MASK is not, and another NMI arrives, we can make
svm_nmi_allowed() return -EBUSY, which causes an immediate VM exit.
Assuming the vNMI takes priority over the fake interrupt we raise, it will be injected,
and on that immediate VM exit we can queue the second NMI by setting V_NMI_PENDING again;
later, when the guest is done with the first NMI, it will take the second.
Of course, if we get a nested exception, then it will be fun...
(The patches don't do this (cause an immediate VM exit), but I think we should make
svm_nmi_allowed() check for the V_NMI_PENDING && !V_NMI_MASK case and return -EBUSY.)
- If both V_NMI_PENDING and V_NMI_MASK are set, then I guess we lose an NMI.
(It means the guest is handling an NMI, there is already a pending NMI, and now
another NMI has arrived.)
Sean, is this the problem you mention?
Best regards,
Maxim Levitsky
>
> > +
> > if ((vcpu->arch.hflags & (HF_NMI_MASK | HF_IRET_MASK)) == HF_NMI_MASK)
> > return; /* IRET will cause a vm exit */
> >
> > --
> > 2.25.1
> >