Message-ID: <20250920083441.306d58d0.alex.williamson@redhat.com>
Date: Sat, 20 Sep 2025 08:34:41 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Ajay Garg <ajaygargnsit@...il.com>
Cc: iommu@...ts.linux-foundation.org, linux-pci@...r.kernel.org, Linux
 Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: How are iommu-mappings set up in guest-OS for
 dma_alloc_coherent

On Sat, 20 Sep 2025 08:34:41 +0530
Ajay Garg <ajaygargnsit@...il.com> wrote:

> > If the VM is configured without a vIOMMU or the vIOMMU is inactive in
> > the guest, all of the guest physical memory is pinned and mapped
> > through the physical IOMMU when the guest is started.  Nothing happens
> > regarding the IOMMU when a coherent mapping is created in the guest,
> > it's already setup.
> >  
> 
> Thanks Alex.
> 
> A follow-up question arises in this scenario.
> 
> Let's take a host OS with two guest OSes running (we can take
> everything to be x86_64 for simplicity).
> Guest G1 has PCI-device-1 attached to it; Guest G2 has PCI-device-2
> attached to it.
> 
> a)
> We do "dma_alloc_coherent" in G1, which returns GVA1 (a CPU/kernel
> virtual address) and GIOVA1 (a device-visible I/O virtual address).
> Since no vIOMMU is exposed in the guest, GIOVA1 can be equal to
> GPA1 (the guest physical address).
> 
> This GIOVA1 (== GPA1) will be programmed into PCI-device-1's DMA
> engine registers to set up DMA.
> 
> b)
> Similarly, we do "dma_alloc_coherent" in G2, and program GIOVA2 (==
> GPA2) into PCI-device-2's DMA engine registers to set up DMA.
> 
> c)
> At this point, the physical/host IOMMU will contain mappings for:
> 
>     GIOVA1 => HPA1
>     GIOVA2 => HPA2
> 
> Assume a "sufficiently" multi-core system, so that both guests can
> run concurrently; in general, HPA1 != HPA2.
> However, since both guests run independently, we could very
> well land in a situation where
> 
>     GIOVA1 == GIOVA2 (== GPA1 == GPA2).
> 
> How do we handle such conflicts?
> Does the x86 IOMMU's PASID come to the rescue here (implicitly meaning
> that PCI-device-1 and PCI-device-2 *must* be PASID-capable)?

No, each device has a unique Requester ID (RID).  The IOMMU page tables
are first indexed by the RID, so two devices using the same IOVA walk
separate page tables and resolve to distinct HPAs.  PASID provides
another level of page-table lookup, which is not necessary in the
scenario described.  Thanks,

Alex
