Message-ID: <MW3PR11MB4554A01E691ED2070D98EE2CC7999@MW3PR11MB4554.namprd11.prod.outlook.com>
Date: Fri, 29 Jul 2022 20:48:36 +0000
From: "Liu, Rong L" <rong.l.liu@...el.com>
To: Dmytro Maluka <dmy@...ihalf.com>,
"Christopherson,, Sean" <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Eric Auger <eric.auger@...hat.com>,
Alex Williamson <alex.williamson@...hat.com>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Tomasz Nowicki <tn@...ihalf.com>,
Grzegorz Jaszczyk <jaz@...ihalf.com>,
Dmitry Torokhov <dtor@...gle.com>
Subject: RE: [PATCH 3/3] KVM: irqfd: Postpone resamplefd notify for oneshot
interrupts

Hi Dmytro,
> -----Original Message-----
> From: Dmytro Maluka <dmy@...ihalf.com>
> Sent: Tuesday, July 26, 2022 7:08 AM
> To: Liu, Rong L <rong.l.liu@...el.com>; Christopherson,, Sean
> <seanjc@...gle.com>; Paolo Bonzini <pbonzini@...hat.com>;
> kvm@...r.kernel.org
> Cc: Thomas Gleixner <tglx@...utronix.de>; Ingo Molnar
> <mingo@...hat.com>; Borislav Petkov <bp@...en8.de>; Dave Hansen
> <dave.hansen@...ux.intel.com>; x86@...nel.org; H. Peter Anvin
> <hpa@...or.com>; linux-kernel@...r.kernel.org; Eric Auger
> <eric.auger@...hat.com>; Alex Williamson
> <alex.williamson@...hat.com>; Zhenyu Wang
> <zhenyuw@...ux.intel.com>; Tomasz Nowicki <tn@...ihalf.com>;
> Grzegorz Jaszczyk <jaz@...ihalf.com>; Dmitry Torokhov
> <dtor@...gle.com>
> Subject: Re: [PATCH 3/3] KVM: irqfd: Postpone resamplefd notify for
> oneshot interrupts
>
> Hi Rong,
>
> On 7/26/22 01:44, Liu, Rong L wrote:
> > Hi Dmytro,
> >
> >> -----Original Message-----
> >> From: Dmytro Maluka <dmy@...ihalf.com>
> >> Sent: Friday, July 15, 2022 8:59 AM
> >> To: Christopherson,, Sean <seanjc@...gle.com>; Paolo Bonzini
> >> <pbonzini@...hat.com>; kvm@...r.kernel.org
> >> Cc: Thomas Gleixner <tglx@...utronix.de>; Ingo Molnar
> >> <mingo@...hat.com>; Borislav Petkov <bp@...en8.de>; Dave Hansen
> >> <dave.hansen@...ux.intel.com>; x86@...nel.org; H. Peter Anvin
> >> <hpa@...or.com>; linux-kernel@...r.kernel.org; Eric Auger
> >> <eric.auger@...hat.com>; Alex Williamson
> >> <alex.williamson@...hat.com>; Liu, Rong L <rong.l.liu@...el.com>;
> >> Zhenyu Wang <zhenyuw@...ux.intel.com>; Tomasz Nowicki
> >> <tn@...ihalf.com>; Grzegorz Jaszczyk <jaz@...ihalf.com>; Dmitry
> >> Torokhov <dtor@...gle.com>; Dmytro Maluka <dmy@...ihalf.com>
> >> Subject: [PATCH 3/3] KVM: irqfd: Postpone resamplefd notify for oneshot
> >> interrupts
> >>
> >> The existing KVM mechanism for forwarding of level-triggered interrupts
> >> using resample eventfd doesn't work quite correctly in the case of
> >> interrupts that are handled in a Linux guest as oneshot interrupts
> >> (IRQF_ONESHOT). Such an interrupt is acked to the device in its
> >> threaded irq handler, i.e. later than it is acked to the interrupt
> >> controller (EOI at the end of hardirq), not earlier.
> >>
> >> Linux keeps such interrupt masked until its threaded handler finishes,
> >> to prevent the EOI from re-asserting an unacknowledged interrupt.
> >> However, with KVM + vfio (or whatever is listening on the resamplefd)
> >> we don't check that the interrupt is still masked in the guest at the
> >> moment of EOI. Resamplefd is notified regardless, so vfio prematurely
> >> unmasks the host physical IRQ, thus a new (unwanted) physical interrupt
> >> is generated in the host and queued for injection to the guest.
> >>
> >> The fact that the virtual IRQ is still masked doesn't prevent this new
> >> physical IRQ from being propagated to the guest, because:
> >>
> >> 1. It is not guaranteed that the vIRQ will remain masked by the time
> >> when vfio signals the trigger eventfd.
> >> 2. KVM marks this IRQ as pending (e.g. setting its bit in the virtual
> >> IRR register of IOAPIC on x86), so after the vIRQ is unmasked, this
> >> new pending interrupt is injected by KVM to the guest anyway.
> >>
> >> At least 2 user-visible issues have been observed, caused by those
> >> extra erroneous pending interrupts for oneshot irqs in the guest:
> >>
> >> 1. System suspend aborted due to a pending wakeup interrupt from
> >> ChromeOS EC (drivers/platform/chrome/cros_ec.c).
> >> 2. Annoying "invalid report id data" errors from ELAN0000 touchpad
> >> (drivers/input/mouse/elan_i2c_core.c), flooding the guest dmesg
> >> every time the touchpad is touched.
> >>
> >> This patch fixes the issue on x86 by checking whether the interrupt is
> >> unmasked when we receive the irq ack (EOI) and, if it is masked,
> >> postponing the resamplefd notify until the guest unmasks it.
> >>
> >> Important notes:
> >>
> >> 1. It doesn't fix the issue for other archs yet, due to some missing
> >> KVM functionality needed by this patch:
> >> - calling mask notifiers is implemented for x86 only
> >> - irqchip ->is_masked() is implemented for x86 only
> >>
> >> 2. It introduces additional spinlock locking in the resample notify
> >> path, since we are no longer just traversing an RCU list of irqfds
> >> but also updating the resampler state. Hopefully this locking won't
> >> noticeably slow down anything for anyone.
> >>
> >
> > Instead of using a spinlock waiting for the unmask event, is it possible
> > to call resampler notify directly when unmask event happens, instead of
> > calling it on EOI?
>
> In this patch, resampler notify is already called directly when unmask
> happens: e.g. with IOAPIC, when the guest unmasks the interrupt by
> writing to IOREDTBLx register, ioapic_write_indirect() calls
> kvm_fire_mask_notifiers() which calls irqfd_resampler_mask() which
> notifies the resampler. On EOI we postpone it just by setting
> resampler->pending to true, not by waiting. The spinlock is needed
> merely to synchronize reading & updating resampler->pending and
> resampler->masked values between possibly concurrently running
> instances
> of irqfd_resampler_ack() and/or irqfd_resampler_mask().
>
> Thanks,
> Dmytro
>
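Thanks, that clarifies the two paths. Just to restate them for myself,
based on your description and the patch below:

	/* EOI path (irq ack notifier), pin still masked in the guest:
	 *   kvm_notify_acked_irq()
	 *     -> irqfd_resampler_ack()
	 *          resampler->pending = true;     (under resampler->lock)
	 *          return without signaling the resamplefds
	 *
	 * unmask path (irq mask notifier):
	 *   ioapic_write_indirect()
	 *     -> kvm_fire_mask_notifiers()
	 *          -> irqfd_resampler_mask(masked = false)
	 *               if (resampler->pending)   (under resampler->lock)
	 *                       clear pending and eventfd_signal() each
	 *                       irqfd->resamplefd
	 */
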
I meant the organization of the code. In the current implementation,
kvm_ioapic_update_eoi_one() calls kvm_notify_acked_irq(). Since your
intention here is to notify vfio of the end of the interrupt on the
IOAPIC unmask event rather than on EOI, why not call
kvm_notify_acked_irq() from ioapic_write_indirect() (roughly at the same
place where kvm_fire_mask_notifiers() is called), instead of calling it
from kvm_ioapic_update_eoi_one()?
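Something along the lines of this rough, untested sketch, just to
illustrate the question (ioapic_write_indirect() is heavily abbreviated,
and the level-trigger condition is my assumption):

	/* in arch/x86/kvm/ioapic.c, ioapic_write_indirect(), roughly where
	 * the mask notifiers already fire on a mask bit change:
	 */
	mask_after = e->fields.mask;
	if (mask_before != mask_after)
		kvm_fire_mask_notifiers(ioapic->kvm, KVM_IRQCHIP_IOAPIC,
					index, mask_after);

	/* hypothetical addition: on unmask of a level-triggered pin, run the
	 * ack notifiers here, instead of (or in addition to) the existing
	 * call in kvm_ioapic_update_eoi_one(), so the resamplefd listener is
	 * told "interrupt finished" at unmask time rather than at EOI time
	 */
	if (mask_before && !mask_after &&
	    e->fields.trig_mode == IOAPIC_LEVEL_TRIG)
		kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, index);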
> >
> >> Regarding #2, there may be an alternative solution worth considering:
> >> extend KVM irqfd (userspace) API to send mask and unmask notifications
> >> directly to vfio/whatever, in addition to resample notifications, to
> >> let vfio check the irq state on its own. There is already locking on
> >> vfio side (see e.g. vfio_platform_unmask()), so this way we would avoid
> >> introducing any additional locking. Also such mask/unmask notifications
> >> could be useful for other cases.
> >>
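If it helps to make that alternative concrete: I imagine the uAPI could
look roughly like the sketch below. This is purely hypothetical (the
maskfd/unmaskfd fields and the flag do not exist today); it only reuses
the existing pad space of struct kvm_irqfd:

	/* hypothetical extension, NOT an existing uAPI */
	#define KVM_IRQFD_FLAG_MASK_NOTIFY	(1 << 2)	/* hypothetical */

	struct kvm_irqfd {
		__u32 fd;
		__u32 gsi;
		__u32 flags;
		__u32 resamplefd;
		__u32 maskfd;	/* hypothetical: signaled when the guest masks the IRQ */
		__u32 unmaskfd;	/* hypothetical: signaled when the guest unmasks it */
		__u8  pad[8];
	};

With something like this, vfio could track the guest mask state itself and
decide when to re-enable the host IRQ, without extra locking inside KVM.
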
> >> Link: https://lore.kernel.org/kvm/31420943-8c5f-125c-a5ee-d2fde2700083@...ihalf.com/
> >> Suggested-by: Sean Christopherson <seanjc@...gle.com>
> >> Signed-off-by: Dmytro Maluka <dmy@...ihalf.com>
> >> ---
> >> include/linux/kvm_irqfd.h | 14 ++++++++++++
> >> virt/kvm/eventfd.c        | 45 +++++++++++++++++++++++++++++++++++++++
> >> 2 files changed, 59 insertions(+)
> >>
> >> diff --git a/include/linux/kvm_irqfd.h b/include/linux/kvm_irqfd.h
> >> index dac047abdba7..01754a1abb9e 100644
> >> --- a/include/linux/kvm_irqfd.h
> >> +++ b/include/linux/kvm_irqfd.h
> >> @@ -19,6 +19,16 @@
> >> * resamplefd. All resamplers on the same gsi are de-asserted
> >> * together, so we don't need to track the state of each individual
> >> * user. We can also therefore share the same irq source ID.
> >> + *
> >> + * A special case is when the interrupt is still masked at the moment
> >> + * an irq ack is received. That likely means that the interrupt has
> >> + * been acknowledged to the interrupt controller but not acknowledged
> >> + * to the device yet, e.g. it might be a Linux guest's threaded
> >> + * oneshot interrupt (IRQF_ONESHOT). In this case notifying through
> >> + * resamplefd is postponed until the guest unmasks the interrupt,
> >> + * which is detected through the irq mask notifier. This prevents
> >> + * erroneous extra interrupts caused by premature re-assert of an
> >> + * unacknowledged interrupt by the resamplefd listener.
> >> */
> >> struct kvm_kernel_irqfd_resampler {
> >> struct kvm *kvm;
> >> @@ -28,6 +38,10 @@ struct kvm_kernel_irqfd_resampler {
> >> */
> >> struct list_head list;
> >> struct kvm_irq_ack_notifier notifier;
> >> + struct kvm_irq_mask_notifier mask_notifier;
> >> + bool masked;
> >> + bool pending;
> >> + spinlock_t lock;
> >> /*
> >> * Entry in list of kvm->irqfd.resampler_list. Use for sharing
> >> * resamplers among irqfds on the same gsi.
> >> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> >> index 50ddb1d1a7f0..9ff47ac33790 100644
> >> --- a/virt/kvm/eventfd.c
> >> +++ b/virt/kvm/eventfd.c
> >> @@ -75,6 +75,44 @@ irqfd_resampler_ack(struct kvm_irq_ack_notifier *kian)
> >> kvm_set_irq(kvm, KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID,
> >> resampler->notifier.gsi, 0, false);
> >>
> >> + spin_lock(&resampler->lock);
> >> + if (resampler->masked) {
> >> + resampler->pending = true;
> >> + spin_unlock(&resampler->lock);
> >> + return;
> >> + }
> >> + spin_unlock(&resampler->lock);
> >> +
> >> + idx = srcu_read_lock(&kvm->irq_srcu);
> >> +
> >> + list_for_each_entry_srcu(irqfd, &resampler->list, resampler_link,
> >> + srcu_read_lock_held(&kvm->irq_srcu))
> >> + eventfd_signal(irqfd->resamplefd, 1);
> >> +
> >> + srcu_read_unlock(&kvm->irq_srcu, idx);
> >> +}
> >> +
> >> +static void
> >> +irqfd_resampler_mask(struct kvm_irq_mask_notifier *kimn, bool masked)
> >> +{
> >> + struct kvm_kernel_irqfd_resampler *resampler;
> >> + struct kvm *kvm;
> >> + struct kvm_kernel_irqfd *irqfd;
> >> + int idx;
> >> +
> >> + resampler = container_of(kimn,
> >> + struct kvm_kernel_irqfd_resampler, mask_notifier);
> >> + kvm = resampler->kvm;
> >> +
> >> + spin_lock(&resampler->lock);
> >> + resampler->masked = masked;
> >> + if (masked || !resampler->pending) {
> >> + spin_unlock(&resampler->lock);
> >> + return;
> >> + }
> >> + resampler->pending = false;
> >> + spin_unlock(&resampler->lock);
> >> +
> >> idx = srcu_read_lock(&kvm->irq_srcu);
> >>
> >> list_for_each_entry_srcu(irqfd, &resampler->list, resampler_link,
> >> @@ -98,6 +136,8 @@ irqfd_resampler_shutdown(struct kvm_kernel_irqfd *irqfd)
> >> if (list_empty(&resampler->list)) {
> >> list_del(&resampler->link);
> >> kvm_unregister_irq_ack_notifier(kvm, &resampler->notifier);
> >> + kvm_unregister_irq_mask_notifier(kvm, resampler->mask_notifier.irq,
> >> + &resampler->mask_notifier);
> >> kvm_set_irq(kvm, KVM_IRQFD_RESAMPLE_IRQ_SOURCE_ID,
> >> resampler->notifier.gsi, 0, false);
> >> kfree(resampler);
> >> @@ -367,11 +407,16 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
> >> INIT_LIST_HEAD(&resampler->list);
> >> resampler->notifier.gsi = irqfd->gsi;
> >> resampler->notifier.irq_acked = irqfd_resampler_ack;
> >> + resampler->mask_notifier.func = irqfd_resampler_mask;
> >> + kvm_irq_is_masked(kvm, irqfd->gsi, &resampler->masked);
> >> + spin_lock_init(&resampler->lock);
> >> INIT_LIST_HEAD(&resampler->link);
> >>
> >> list_add(&resampler->link, &kvm->irqfds.resampler_list);
> >> kvm_register_irq_ack_notifier(kvm,
> >> &resampler->notifier);
> >> + kvm_register_irq_mask_notifier(kvm, irqfd->gsi,
> >> + &resampler->mask_notifier);
> >> irqfd->resampler = resampler;
> >> }
> >>
> >> --
> >> 2.37.0.170.g444d1eabd0-goog
> >