Message-ID: <20240809224910.GM8378@nvidia.com>
Date: Fri, 9 Aug 2024 19:49:10 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: Robin Murphy <robin.murphy@....com>,
	"Tian, Kevin" <kevin.tian@...el.com>,
	"joro@...tes.org" <joro@...tes.org>,
	"will@...nel.org" <will@...nel.org>,
	"shuah@...nel.org" <shuah@...nel.org>,
	"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v2 2/3] iommu/dma: Support MSIs through nested domains

On Fri, Aug 09, 2024 at 12:18:42PM -0700, Nicolin Chen wrote:

> > The bigger issue is that we still have the hypervisor GIC driver
> > controlling things, and it will need to know to use the guest-provided
> > MSI address captured during the MSI trap, not its own address. I don't
> > yet have an idea of how to connect those two parts.
> 
> You mean the gPA of the vITS vs. the PA of the ITS, right? I think
> that's because only the VMM knows the virtual IRQ number to insert?
> We don't seem to have a choice for that unless we want to poke
> a hole in the vGIC design...

I mean what you explained in your other email:

> - MSI configuration in the host (via a VFIO_IRQ_SET_ACTION_TRIGGER
>   hypercall) should set gIOVA instead of fetching from msi_cookie.
>   That hypercall doesn't forward an address currently, since the host
>   kernel pre-sets the msi_cookie. So, we need a way to forward the
>   gIOVA to the kernel and pack it into the msi_msg structure. I haven't
>   read the VFIO PCI code thoroughly yet, but wonder if we could just
>   let the guest program the gIOVA into the PCI register and have it
>   fall through to the hardware, so the host kernel handling that
>   hypercall can just read it back from the register?

We still need to convey the MSI-X address from the register trap into
the kernel and use the VM-supplied address instead of calling
iommu_dma_compose_msi_msg().
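
(Purely illustrative: roughly what "use the VM-supplied address" could
mean on the kernel side. The helper below is made up, and how the gIOVA
actually reaches it is exactly the missing plumbing; only
iommu_dma_compose_msi_msg(), struct msi_msg and lower/upper_32_bits()
exist today.)

#include <linux/kernel.h>
#include <linux/msi.h>

/*
 * Hypothetical alternative to the msi_cookie path: instead of asking
 * iommu_dma_compose_msi_msg() for the IOVA the host mapped, write the
 * gIOVA that the VMM captured from the guest's MSI-X table trap.
 */
static void compose_msi_msg_from_guest(struct msi_desc *desc,
				       struct msi_msg *msg,
				       dma_addr_t giova)
{
	msg->address_lo = lower_32_bits(giova);
	msg->address_hi = upper_32_bits(giova);
	/* msg->data (the event ID) is still owned by the ITS driver. */
}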

When you did your test, you may have lucked out: the guest was putting
the ITS at the same place the host kernel expected, because they are
both running the same software and making the same decision :)

Maybe take a look at what pushing the address down through
VFIO_IRQ_SET_ACTION_TRIGGER would look like?
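
(For reference, this is how a VMM wires up one MSI-X vector today:
VFIO_DEVICE_SET_IRQS with VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER carries only an eventfd, no MSI address,
which is why the host falls back to the msi_cookie. Nothing below adds
the gIOVA; it just shows the interface that would have to grow to carry
it.)

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int vfio_trigger_msix(int device_fd, unsigned int vector, int eventfd)
{
	/* vfio_irq_set has a flexible data[] tail; here it holds one eventfd */
	char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
	struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;
	int32_t efd = eventfd;

	irq_set->argsz = sizeof(buf);
	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
	irq_set->start = vector;
	irq_set->count = 1;
	memcpy(irq_set->data, &efd, sizeof(efd));

	return ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irq_set);
}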

Jason
