[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <8735rhbdxp.wl-maz@kernel.org>
Date: Tue, 10 Aug 2021 14:57:06 +0100
From: Marc Zyngier <maz@...nel.org>
To: Sunil Muthuswamy <sunilmut@...rosoft.com>
Cc: Robin Murphy <robin.murphy@....com>,
Thomas Gleixner <tglx@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>,
Michael Kelley <mikelley@...rosoft.com>,
Boqun Feng <Boqun.Feng@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>,
Arnd Bergmann <arnd@...db.de>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>
Subject: Re: [EXTERNAL] Re: [RFC 1/1] irqchip/gic-v3-its: Add irq domain and chip for Direct LPI without ITS
On Tue, 10 Aug 2021 02:10:40 +0100,
Sunil Muthuswamy <sunilmut@...rosoft.com> wrote:
>
> On Monday, August 9, 2021 2:15 AM,
> Marc Zyngier <maz@...nel.org> wrote:
> [...]
> > If you plug directly into the GICv3 layer, I'd rather you inject SPIs,
> > just like any other non-architectural MSI controller. You can directly
> > interface with the ACPI GSI layer for that, without any need to mess
> > with the GICv3 internals. The SPI space isn't very large, but still
> > much larger than the equivalent x86 space (close to 1000).
> >
> > If time is of the essence, I suggest you go the SPI way. For anything
> > involving LPIs, I really want to see a firmware spec that works for
> > everyone as opposed to a localised Hyper-V hack.
> >
> Ok, thanks. Before we commit to anything, I would like to make sure
> that I am on the same page in terms of your description. With that in
> mind, I have few questions. Hopefully, these should settle the matter.
> 1. If we go with the SPI route, then the way I envision it is that the
> Hyper-V vPCI driver will implement an IRQ chip, which will take
> care of allocating & managing the SPI interrupt for Hyper-V vPCI.
> This IRQ chip will parent itself to the architectural GIC IRQ chip for
> general interrupt management. Does that match with your
> understanding/suggestion as well?
Yes.
>
> 2. In the above, how will Hyper-V vPCI module discover the
> architectural GIC IRQ domain generically for virtual devices that
> are not firmware enumerated? Today, the GIC v3 IRQ domain is
> not exported and the general 'irq_find_xyz' APIs only work for
> firmware enumerated devices (i.e. something that has a fwnode
> handle).
You don't need to discover it with ACPI. You simply instantiate your
own irqdomain using acpi_irq_create_hierarchy(), which will do the
right thing. Your PCI driver will have to create its own fwnode out of
thin air (there is an API for that), and call into this function to
plumb everything.
>
> 3. Longer term, if we implement LPIs (with an ITS or Direct LPI), to
> be able to support all scenarios such as Live Migration, the
> Hyper-V virtual PCI driver would like to be able to control the
> MSI address & data that gets programmed on the device
> (i.e. .irq_compose_msi_msg). We can use the architectural
> methods for everything else. Does that fit into the realm of
> what would be acceptable upstream?
I cannot see how this works. The address has to match that of the
virtual HW you target (whether this is a redistributor or an ITS), and
the data is only meaningful in that context. And it really shouldn't
matter at all, as I expect you don't let the guest directly write to
the PCI MSI-X table.
If you let the guest have access direct to that table (which seems to
contradict your "live migration" argument), then your best bet is to
use provide a skeletal IOMMU implementation, and get
iommu_dma_compose_msi_msg() to do the remapping. But frankly, that's
horrible and I fully expect the IOMMU people to push back (and that
still doesn't give you any control over the data, only the address).
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
Powered by blists - more mailing lists