Message-ID: <Zraobe/Q9v3nsnmS@Asurada-Nvidia>
Date: Fri, 9 Aug 2024 16:38:21 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: Robin Murphy <robin.murphy@....com>, "Tian, Kevin" <kevin.tian@...el.com>,
"joro@...tes.org" <joro@...tes.org>, "will@...nel.org" <will@...nel.org>,
"shuah@...nel.org" <shuah@...nel.org>, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-kselftest@...r.kernel.org"
<linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v2 2/3] iommu/dma: Support MSIs through nested domains
On Fri, Aug 09, 2024 at 07:49:10PM -0300, Jason Gunthorpe wrote:
> On Fri, Aug 09, 2024 at 12:18:42PM -0700, Nicolin Chen wrote:
>
> > > The bigger issue is that we still have the hypervisor GIC driver
> > > controlling things and it will need to know to use the guest-provided
> > > MSI address captured during the MSI trap, not its own address. I don't
> > > have an idea how to connect those two parts yet.
> >
> > You mean the gPA of the vITS vs. the PA of the ITS, right? I think
> > that's because only the VMM knows the virtual IRQ number to insert?
> > We don't seem to have a choice for that unless we want to poke
> > a hole in the vGIC design...
>
> I mean what you explained in your other email:
>
> > - MSI configuration in the host (via a VFIO_IRQ_SET_ACTION_TRIGGER
> > hypercall) should set the gIOVA instead of fetching it from the
> > msi_cookie. That hypercall doesn't forward an address currently,
> > since the host kernel pre-sets the msi_cookie. So, we need a way
> > to forward the gIOVA to the kernel and pack it into the msi_msg
> > structure. I haven't read the VFIO PCI code thoroughly yet, but I
> > wonder if we could just let the guest program the gIOVA into the
> > PCI register and let it fall through to the hardware, so the host
> > kernel handling that hypercall can just read it back from the
> > register?
>
> We still need to convey the MSI-X address from the register trap into
> the kernel and use the VM-supplied address instead of calling
> iommu_dma_compose_msi_msg().
Yes.
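
To make it concrete, here is roughly what I picture for the compose
path (a completely untested sketch; the helper name and the "giova"
parameter are made up, assuming we stash the trapped MSI-X address
per vector somewhere the MSI path can see it):

	/*
	 * Hypothetical helper in the VFIO PCI MSI path: prefer the
	 * VM-supplied gIOVA (captured at the MSI-X register trap and
	 * mapped by the nested S1 domain) over composing the doorbell
	 * address from the host-side msi_cookie.
	 */
	static void vfio_msi_compose_addr(struct msi_desc *desc,
					  struct msi_msg *msg, u64 giova)
	{
		if (giova) {
			/* Use the guest-programmed doorbell address */
			msg->address_lo = lower_32_bits(giova);
			msg->address_hi = upper_32_bits(giova);
		} else {
			/* Fallback: host-managed msi_cookie mapping */
			iommu_dma_compose_msi_msg(desc, msg);
		}
	}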
> When you did your test, you may have lucked out that the guest was
> putting the ITS at the same place the host kernel expected, because
> they are both running the same software and making the same
> decision :)
Oh, the devil's in the details: I hard-coded every address in the
vITS's 2-stage mapping lol
> Maybe take a look at what pushing the address down through the
> VFIO_IRQ_SET_ACTION_TRIGGER would look like?
Yeah, there's a data field, and we can add a new flag to define the
format/type. Will take a closer look.
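
For reference, struct vfio_irq_set already carries a variable-length
data[] payload:

	struct vfio_irq_set {
		__u32	argsz;
		__u32	flags;
		__u32	index;
		__u32	start;
		__u32	count;
		__u8	data[];
	};

So, as a strawman (the flag name and layout below are made up, just
to show the idea):

	/* New data type: data[] carries one __u64 gIOVA per vector */
	#define VFIO_IRQ_SET_DATA_MSI_IOVA	(1 << 6)

	/*
	 * User space would call VFIO_DEVICE_SET_IRQS with
	 * flags = VFIO_IRQ_SET_DATA_MSI_IOVA | VFIO_IRQ_SET_ACTION_TRIGGER,
	 * letting the host use the forwarded gIOVA as the MSI doorbell
	 * address for vectors [start, start + count).
	 */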
Thanks
Nicolin