Message-ID: <CAJhGHyAcnwkCfTcnxXcgAHnF=wPbH2EDp7H+e74ce+oNOWJ=_Q@mail.gmail.com>
Date: Tue, 13 Apr 2021 19:03:17 +0800
From: Lai Jiangshan <jiangshanlai+lkml@...il.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
Filippo Sironi <sironi@...zon.de>,
David Woodhouse <dwmw@...zon.co.uk>,
"v4.7+" <stable@...r.kernel.org>,
Wanpeng Li <wanpengli@...cent.com>
Subject: Re: [PATCH 2/2] KVM: x86: Fix split-irqchip vs interrupt injection
window request
On Tue, Apr 13, 2021 at 5:43 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Fri, Apr 09, 2021, Lai Jiangshan wrote:
> > On Fri, Nov 27, 2020 at 7:26 PM Paolo Bonzini <pbonzini@...hat.com> wrote:
> > >
> > > kvm_cpu_accept_dm_intr and kvm_vcpu_ready_for_interrupt_injection are
> > > a hodge-podge of conditions, hacked together to get something that
> > > more or less works. But what is actually needed is much simpler;
> > > in both cases the fundamental question is, do we have a place to stash
> > > an interrupt if userspace does KVM_INTERRUPT?
> > >
> > > In userspace irqchip mode, that is !vcpu->arch.interrupt.injected.
> > > Currently kvm_event_needs_reinjection(vcpu) covers it, but it is
> > > unnecessarily restrictive.
> > >
> > > In split irqchip mode it's a bit more complicated, we need to check
> > > kvm_apic_accept_pic_intr(vcpu) (the IRQ window exit is basically an INTACK
> > > cycle and thus requires ExtINTs not to be masked) as well as
> > > !pending_userspace_extint(vcpu). However, there is no need to
> > > check kvm_event_needs_reinjection(vcpu), since split irqchip keeps
> > > pending ExtINT state separate from event injection state, and checking
> > > kvm_cpu_has_interrupt(vcpu) is wrong too since ExtINT has higher
> > > priority than APIC interrupts. In fact the latter fixes a bug:
> > > when userspace requests an IRQ window vmexit, an interrupt in the
> > > local APIC can cause kvm_cpu_has_interrupt() to be true and thus
> > > kvm_vcpu_ready_for_interrupt_injection() to return false. When this
> > > happens, vcpu_run does not exit to userspace but the interrupt window
> > > vmexits keep occurring. The VM loops without any hope of making progress.
> > >
> > > Once we try to fix these with something like
> > >
> > >      return kvm_arch_interrupt_allowed(vcpu) &&
> > > -        !kvm_cpu_has_interrupt(vcpu) &&
> > > -        !kvm_event_needs_reinjection(vcpu) &&
> > > -        kvm_cpu_accept_dm_intr(vcpu);
> > > +        (!lapic_in_kernel(vcpu)
> > > +            ? !vcpu->arch.interrupt.injected
> > > +            : (kvm_apic_accept_pic_intr(vcpu)
> > > +               && !pending_userspace_extint(v)));
> > >
> > > we realize two things. First, thanks to the previous patch the complex
> > > conditional can reuse !kvm_cpu_has_extint(vcpu). Second, the interrupt
> > > window request in vcpu_enter_guest()
> > >
> > >         bool req_int_win =
> > >                 dm_request_for_irq_injection(vcpu) &&
> > >                 kvm_cpu_accept_dm_intr(vcpu);
> > >
> > > should be kept in sync with kvm_vcpu_ready_for_interrupt_injection():
> > > it is unnecessary to ask the processor for an interrupt window
> > > if we would not be able to return to userspace. Therefore, the
> > > complex conditional is really the correct implementation of
> > > kvm_cpu_accept_dm_intr(vcpu). It all makes sense:
> > >
> > > - we can accept an interrupt from userspace if there is a place
> > > to stash it (and, for irqchip split, ExtINTs are not masked).
> > > Interrupts from userspace _can_ be accepted even if right now
> > > EFLAGS.IF=0.
> >
> > Hello, Paolo
> >
> > If userspace does KVM_INTERRUPT, vcpu->arch.interrupt.injected is
> > set immediately, and in inject_pending_event(), we have
> >
> >         else if (!vcpu->arch.exception.pending) {
> >                 if (vcpu->arch.nmi_injected) {
> >                         kvm_x86_ops.set_nmi(vcpu);
> >                         can_inject = false;
> >                 } else if (vcpu->arch.interrupt.injected) {
> >                         kvm_x86_ops.set_irq(vcpu);
> >                         can_inject = false;
> >                 }
> >         }
> >
> > I'm curious about that can the kvm_x86_ops.set_irq() here be possible
> > to queue the irq with EFLAGS.IF=0? If not, which code prevents it?
>
> The interrupt is only directly injected if the local APIC is _not_ in-kernel.
> If userspace is managing the local APIC, my understanding is that userspace is
> also responsible for honoring EFLAGS.IF, though KVM aids userspace by updating
> vcpu->run->ready_for_interrupt_injection when exiting to userspace. When
> userspace is modeling the local APIC, that resolves to
> kvm_vcpu_ready_for_interrupt_injection():
>
>         return kvm_arch_interrupt_allowed(vcpu) &&
>                 kvm_cpu_accept_dm_intr(vcpu);
>
> where kvm_arch_interrupt_allowed() checks EFLAGS.IF (and an edge case related to
> nested virtualization). KVM also captures EFLAGS.IF in vcpu->run->if_flag.
> For whatever reason, QEMU checks both vcpu->run flags before injecting an IRQ,
> maybe to handle a case where QEMU itself clears EFLAGS.IF?
If userspace is managing the local APIC, the user VMM would inject an IRQ
when kvm_run->ready_for_interrupt_injection=1, since this flag implied
EFLAGS.IF=1 before this patch (for example, gVisor checks only this flag,
not kvm_run->if_flag). This patch claims that there is a place to stash
the IRQ when EFLAGS.IF=0, but inject_pending_event() seems to ignore
EFLAGS.IF and queues the IRQ directly to the guest in the branch quoted
above that calls "kvm_x86_ops.set_irq(vcpu)".
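For reference, the userspace-side pattern I have in mind looks roughly
like the sketch below. It is not gVisor's or QEMU's actual code, only a
minimal illustration assuming the documented kvm_run fields and the
KVM_INTERRUPT ioctl; try_inject_irq() and its vector argument are made up
for the example.

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical helper: inject an external IRQ only when KVM says it is OK. */
    static int try_inject_irq(int vcpu_fd, struct kvm_run *run, __u32 vector)
    {
            /*
             * Before this patch, ready_for_interrupt_injection=1 implied
             * EFLAGS.IF=1, so checking only the ready flag (as gVisor does)
             * was enough.  Checking if_flag as well (as QEMU reportedly
             * does) guards the EFLAGS.IF=0 case explicitly.
             */
            if (!run->ready_for_interrupt_injection || !run->if_flag)
                    return -EAGAIN;  /* request an interrupt window instead */

            struct kvm_interrupt irq = { .irq = vector };
            return ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
    }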
I have encountered a problem, but I have not been able to dissect it
exactly because some internal code is involved. It possibly results from
Wanpeng Li's patch (I am replying to this patch because it relaxes the
condition even further, without explaining how the IRQ is suppressed or
stashed rather than delivered to the guest).
The scenario I suspect: a guest userspace application hits an exception,
the vCPU exits and returns to the user VMM (gVisor), and under the
combined conditions the user VMM wants to queue an IRQ. At that point
EFLAGS.IF=1 and ready_for_interrupt_injection=1, so the user VMM happily
queues the IRQ. In inject_pending_event(), the IRQ has lower priority, so
the earlier exception is delivered to the guest first. But the IRQ cannot
be suppressed indefinitely, and it ends up being delivered at the
beginning of the exception handler, where EFLAGS.IF=0.
(Before Wanpeng Li's patch, ready_for_interrupt_injection would have been
0 here, since an exception was pending.)
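To spell out why I think the pre-patch check behaved differently, here is
my reading of the two versions side by side, taken from the diff quoted
above and from Sean's reply; the comments are my interpretation, please
correct me if they are wrong.

    /* Before this series (per the diff quoted above): */
    return kvm_arch_interrupt_allowed(vcpu) &&   /* EFLAGS.IF + nested-virt edge case */
           !kvm_cpu_has_interrupt(vcpu) &&
           !kvm_event_needs_reinjection(vcpu) &&  /* my reading: false with an exception pending */
           kvm_cpu_accept_dm_intr(vcpu);

    /* After this series (per Sean's reply above): */
    return kvm_arch_interrupt_allowed(vcpu) &&
           kvm_cpu_accept_dm_intr(vcpu);          /* only "is there a place to stash it?" */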
All of the above is just my guess, but I would like to gather more clues.
And this patch says:
: we can accept an interrupt from userspace if there is a place
: to stash it (and, for irqchip split, ExtINTs are not masked).
: Interrupts from userspace _can_ be accepted even if right now
: EFLAGS.IF=0.
So it would help my analysis to know how this behavior is achieved, since
inject_pending_event() does not check EFLAGS.IF before the branch that
calls "kvm_x86_ops.set_irq(vcpu)".
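To make the question concrete, this is the kind of guard I was (perhaps
naively) expecting to find in that branch. It is purely hypothetical, not
actual KVM code; it only reuses helpers that already exist
(kvm_arch_interrupt_allowed() and kvm_x86_ops.enable_irq_window()).

            } else if (vcpu->arch.interrupt.injected) {
                    /*
                     * Hypothetical: keep the userspace-provided IRQ stashed
                     * in vcpu->arch.interrupt.injected and open an interrupt
                     * window instead of injecting while EFLAGS.IF=0.
                     */
                    if (!kvm_arch_interrupt_allowed(vcpu)) {
                            kvm_x86_ops.enable_irq_window(vcpu);
                    } else {
                            kvm_x86_ops.set_irq(vcpu);
                            can_inject = false;
                    }
            }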
Thanks
Lai.
>
> > I'm asking about this because I just noticed that interrupt can
> > be queued when exception pending, and this patch relaxed it even
> > more.
> >
> > Note: interrupt can NOT be queued when exception pending
> > until 664f8e26b00c7 ("KVM: X86: Fix loss of exception which
> > has not yet been injected") which I think is dangerous.