Message-ID: <BN9PR11MB527619B099061B3814EB40408C719@BN9PR11MB5276.namprd11.prod.outlook.com>
Date:   Fri, 10 Dec 2021 07:29:01 +0000
From:   "Tian, Kevin" <kevin.tian@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Jason Gunthorpe <jgg@...dia.com>
CC:     "Jiang, Dave" <dave.jiang@...el.com>,
        Logan Gunthorpe <logang@...tatee.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Bjorn Helgaas <helgaas@...nel.org>,
        Marc Zyngier <maz@...nel.org>,
        Alex Williamson <alex.williamson@...hat.com>,
        "Dey, Megha" <megha.dey@...el.com>,
        "Raj, Ashok" <ashok.raj@...el.com>,
        "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Jon Mason <jdmason@...zu.us>, Allen Hubbe <allenbh@...il.com>,
        "linux-ntb@...glegroups.com" <linux-ntb@...glegroups.com>,
        "linux-s390@...r.kernel.org" <linux-s390@...r.kernel.org>,
        Heiko Carstens <hca@...ux.ibm.com>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        "x86@...nel.org" <x86@...nel.org>, Joerg Roedel <jroedel@...e.de>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: RE: [patch 21/32] NTB/msi: Convert to msi_on_each_desc()

> From: Thomas Gleixner <tglx@...utronix.de>
> Sent: Friday, December 10, 2021 8:26 AM
> 
> On Thu, Dec 09 2021 at 23:09, Thomas Gleixner wrote:
> > On Thu, Dec 09 2021 at 16:58, Jason Gunthorpe wrote:
> >> Okay, I think I get it. Would be nice to have someone from intel
> >> familiar with the vIOMMU protocols and qemu code remark what the
> >> hypervisor side can look like.
> >>
> >> There is a bit more work here, we'd have to change VFIO to somehow
> >> entirely disconnect the kernel IRQ logic from the MSI table and
> >> directly pass control of it to the guest after the hypervisor IOMMU IR
> >> secures it. ie directly mmap the msi-x table into the guest
> >
> > That makes everything consistent and a clear cut on all levels, right?
> 
> Let me give a bit more rationale here, why I think this is the right
> thing to do. There are several problems with IMS both on the host and on
> the guest side:
> 
>   1) Contrary to MSI/MSI-X the address/data pair is not completely
>      managed by the core. It's handed off to driver writers in the
>      hope they get it right.
> 
>   2) Without interrupt remapping there is a fundamental issue on x86
>      for the affinity setting case, as there is no guarantee that
>      the magic protocol which we came up with (see msi_set_affinity()
>      in the x86 code) is correctly implemented at the driver level or
>      that the update is truly atomic so that the problem does not
>      arise. My interest in chasing these things is exactly zero.
> 
>      With interrupt remapping the affinity change happens at the IRTE
>      level and not at the device level. It's a one time setup for the
>      device.

This is a good rationale for making IMS depend on IR when running natively.
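
A minimal sketch of what that dependency could look like on the host
side (pci_ims_create_domain(), msi_parent_is_remapped() and
create_ims_domain() are made-up names, only to illustrate refusing IMS
when no IR parent domain is present):

    /* Hypothetical sketch: only create an IMS irqdomain if the device's
     * MSI parent domain provides interrupt remapping. */
    static struct irq_domain *pci_ims_create_domain(struct pci_dev *pdev)
    {
            struct irq_domain *parent = dev_get_msi_domain(&pdev->dev);

            if (!parent || !msi_parent_is_remapped(parent))
                    return ERR_PTR(-EOPNOTSUPP);    /* no IR, no IMS */

            return create_ims_domain(pdev, parent); /* made-up helper */
    }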

> 
>      Just for the record:
> 
>      The ATH11 thing does not have that problem by pure luck because
>      multi-vector MSI is not supported on X86 unless interrupt
>      remapping is enabled.
> 
>      The switchtec NTB thing will fall apart w/o remapping AFAICT.
> 
>   3) With remapping the message for the device is constructed at
>      allocation time. It does not change after that because the affinity
>      change happens at the remapping level, which eliminates #2 above.
> 
>      That has another advantage for IMS because it does not require any
>      synchronization with the queue or whatever is involved. The next
>      interrupt after the change at the remapping level ends up on the
>      new target.

Yes

> 
>   4) For the guest side we agreed that we need a hypercall because the
>      host can't trap the write to the MSI[-X] entry anymore.

To be accurate I'd rather not call it "can't trap". The host still
traps the MSI/MSI-X entry if the hypercall is not used; that covers
current guest OSes which don't have this hypercall mechanism. For a
future guest OS which does support this machinery, a handshake from
the guest will disable the trap for MSI-X and map the table for direct
guest access on the fly.

MSI always has to be trapped, even after the guest has acquired the
right data/addr pair via the hypercall, since it is a PCI config space
capability.
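
To illustrate the handshake idea from the guest's point of view, a
rough sketch (msi_hypercall_available() and msi_hypercall_compose_msg()
are invented names for illustration; only __pci_write_msi_msg() is the
real kernel helper):

    /* A new guest asks the host for the real addr/data pair via the
     * hypercall; an old guest just writes the entry and relies on the
     * host trapping the access. */
    static void guest_write_msix_msg(struct msi_desc *desc, struct msi_msg *msg)
    {
            if (msi_hypercall_available())
                    msi_hypercall_compose_msg(desc, msg); /* host fills addr/data */

            __pci_write_msi_msg(desc, msg);
    }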

> 
>      Aside of the fact that this creates a special case for IMS which is
>      undesirable in my opinion, it's not really obvious where the
>      hypercall should be placed to work for all scenarios so that it can
>      also solve the existing issue of silent failures.
> 
>   5) It's not possible for the kernel to reliably detect whether it is
>      running on bare metal or not. Yes we talked about heuristics, but
>      that's something I really want to avoid.

How would the hypercall mechanism avoid such heuristics?
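
For reference, the heuristic usually discussed is the CPUID hypervisor
bit, which is advisory only and therefore exactly the kind of thing
that cannot be relied upon (x86 sketch):

    /* CPUID.1:ECX[31] is only set voluntarily by hypervisors, so its
     * absence does not prove we are on bare metal. */
    static bool probably_bare_metal(void)
    {
            return !boot_cpu_has(X86_FEATURE_HYPERVISOR);
    }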

> 
> When looking at the above I came to the conclusion that the consistent
> way is to make IMS depend on IR both on the host and the guest as this
> solves all of the above in one go.
> 
> How would that work? With IR the irqdomain hierarchy looks like this:
> 
>                    |--IO/APIC
>                    |--MSI
>     vector -- IR --|--MSI-X
>                    |--IMS
> 
> There are several context where this matters:
> 
>   1) Allocation of an interrupt, e.g. pci_alloc_irq_vectors().
> 
>   2) Activation of an interrupt which happens during allocation and/or
>      at request_irq() time
> 
>   3) Interrupt affinity setting
> 
> #1 Allocation
> 
>    That allocates an IRTE, which can fail
> 
> #2 Activation
> 
>    That's the step where actually a CPU vector is allocated, where the
>    IRTE is updated and where the device message is composed to target
>    the IRTE.
> 
>    On X86 activation is happening twice:
> 
>    1) During allocation it allocates a special CPU vector which is
>       handed out to all allocated interrupts. That's called reservation
>       mode. This was introduced to prevent vector exhaustion for two
>       cases:
> 
>        - Devices allocating tons of MSI-X vectors without using
>          them. That obviously needs to be fixed at the device driver
>          level, but due to the fact that post probe() allocation is not
>          supported, that's not always possible
> 
>        - CPU hotunplug
> 
>          All vectors targeting the outgoing CPU need to be migrated to a
>          new target CPU, which can result in exhaustion of the vector
>          space.
> 
>          Reservation mode avoids that because it just uses a unique
>          vector for all interrupts which are allocated but not
>          requested.
> 
>     2) On request_irq()
> 
>        As the vector assigned during allocation is just a place holder
>        to make the MSI hardware happy it needs to be replaced by a
>        real vector.
> 
>    Both can fail and the error is propagated through the call chain
> 
> #3 Changing the interrupt affinity
> 
>    This obviously needs to allocate a new target CPU vector and update
>    the IRTE.
> 
>    Allocating a new target CPU vector can fail.
> 
> When looking at it from the host side, then the host needs to do the
> same things:
> 
>   1) Allocate an IRTE for #1
> 
>   2) Update the IRTE for #2 and #3
> 
> But that does not necessarily mean that we need two hypercalls. We can
> get away with one in the code which updates the IRTE and that would be
> the point where the host side has to allocate the backing host
> interrupt, which would replace that allocate on unmask mechanism which
> is used today.
> 
> It might look awkward at first sight that an IRTE update can fail, but
> it's not that awkward when put into context:
> 
>   The first update happens during activation and activation can fail for
>   various reasons.
> 
> The charm is that this works for everything from INTx to IMS because all
> of them go through the same procedure, except that INTx (IO/APIC) does
> not support the reservation mode dance.
> 
> Thoughts?
> 

The above sounds like the right direction to me, and here are some
more thoughts down the road.

First let's look at how the interrupt is delivered to the guest today.

With IR the physical interrupt is first delivered to the host, then
converted into a virtual interrupt and finally injected into the guest.
Let's put posted interrupts aside since they are an optional platform
capability.

    HW interrupt
        vfio_msihandler(): ->irqfd
            kvm_arch_set_irq_inatomic()
                kvm_set_msi_irq()
                kvm_irq_delivery_to_apic_fast()

Virtual interrupt injection is based on the virtual routing information
registered by Qemu via KVM_SET_GSI_ROUTING(gsi, routing_info).

The GSI is later associated with an irqfd via KVM_IRQFD(irqfd, gsi).

Qemu composes the virtual routing information by trapping the various
interrupt storages (INTx, MSI, MSI-X, etc.). When IR is enabled in the
vIOMMU, the routing info is composed by combining the storage entry
and the vIRTE entry.
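
For completeness, the userspace side of that registration looks roughly
like this (minimal sketch against the KVM uAPI, error handling omitted):

    #include <linux/kvm.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    /* Tell KVM how to translate a GSI into a virtual MSI, then attach
     * an eventfd to that GSI. */
    static void set_msi_route(int vm_fd, int efd, unsigned int gsi,
                              struct kvm_irq_routing_msi msi)
    {
            struct kvm_irq_routing *r;

            r = calloc(1, sizeof(*r) + sizeof(r->entries[0]));
            r->nr = 1;
            r->entries[0].gsi = gsi;
            r->entries[0].type = KVM_IRQ_ROUTING_MSI;
            r->entries[0].u.msi = msi;  /* addr/data from the trapped entry (+ vIRTE with IR) */
            ioctl(vm_fd, KVM_SET_GSI_ROUTING, r);
            free(r);

            struct kvm_irqfd irqfd = { .fd = efd, .gsi = gsi };
            ioctl(vm_fd, KVM_IRQFD, &irqfd);
    }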

Now, adding the new hypercall machinery to the above flow, we obviously
also want the hypercall to carry the virtual routing information
(vIRTE) to the host, besides acquiring the data/addr pair from the
host. Without the trap this information is now hidden from Qemu.

Then Qemu needs to find out the GSI number for the vIRTE handle.
Again, Qemu doesn't have that information since, with no trap, it
doesn't know which MSI[-X] entry points to this handle.

This implies that we may also need to carry the device ID, MSI entry
index, etc. in the hypercall, so Qemu can associate the virtual
routing info with the right [irqfd, gsi].
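
Just to make that concrete, the kind of payload I have in mind is
something like below (the layout and the names are entirely made up,
it only lists the pieces Qemu would need to correlate the call with
the right [irqfd, gsi]):

    /* Entirely hypothetical hypercall payload, for illustration only. */
    struct msi_hcall_req {
            u64 dev_id;        /* which device (segment/bus/devfn or a handle) */
            u32 entry_type;    /* MSI / MSI-X / IMS */
            u32 entry_index;   /* which entry in that storage */
            u64 virte_handle;  /* guest vIRTE handle carrying the routing info */
    };

    struct msi_hcall_resp {
            u32 address_lo;    /* addr/data pair composed by the host */
            u32 address_hi;
            u32 data;
    };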

In your model the hypercall is raised by the IR domain. Do you see any
problem with finding that information within the IR domain?

Thanks
Kevin
