Message-ID: <20251219170637.2c161b7b.alex@shazbot.org>
Date: Fri, 19 Dec 2025 17:06:37 -0700
From: Alex Williamson <alex@...zbot.org>
To: Ajay Garg <ajaygargnsit@...il.com>
Cc: QEMU Developers <qemu-devel@...gnu.org>,
iommu@...ts.linux-foundation.org, linux-pci@...r.kernel.org, Linux Kernel
Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: A lingering doubt on PCI-MMIO region of PCI-passthrough-device
On Fri, 19 Dec 2025 11:53:56 +0530
Ajay Garg <ajaygargnsit@...il.com> wrote:
> Hi Alex.
> Kindly confirm whether the steps listed in the previous email are correct.
>
> (Have added the qemu mailing-list too, as it might be a QEMU thing as
> well, since virtual PCI is in the picture).
>
> On Mon, Dec 15, 2025 at 9:20 AM Ajay Garg <ajaygargnsit@...il.com> wrote:
> >
> > Thanks Alex.
> >
> > So does something like the following happen:
> >
> > i)
> > During bootup, the guest starts PCI enumeration as usual.
> >
> > ii)
> > Upon discovering the "passthrough-device", the guest carves out the
> > MMIO regions (as usual) in the guest physical address space, and
> > attempts to program the BARs with the guest-physical base addresses
> > it carved out.
> >
> > iii)
> > These attempts to program the BARs (which lie in the
> > "passthrough-device"'s config space) are intercepted by the
> > hypervisor instead (causing a VM-exit in the interim).
> >
> > iv)
> > The hypervisor uses the above info to update the EPT, so that GPA =>
> > HPA translations resolve correctly when the guest later accesses the
> > PCI-MMIO regions (once the guest is fully booted up). Also, the
> > hypervisor reports the write as successful (without "really"
> > re-programming the BARs).
> >
> > v)
> > A VM-entry is then performed, and the guest resumes with the
> > "impression" that it has programmed the BARs itself.
> >
> > Is the above sequencing correct at a bird's-eye-view level?
It's not far off. The key is simply that we can create a host virtual
mapping to the device BARs, i.e. an mmap. The guest enumerates emulated
BARs; they're only used for sizing and locating the BARs in the guest
physical address space. When the guest BAR is programmed and memory is
enabled, the address space in QEMU is populated at the BAR-indicated
GPA using the mmap backing. KVM memory slots are used to fill the
mappings in the vCPU. The same BAR mmap is also used to provide DMA
mapping of the BAR through the IOMMU in the legacy type1 IOMMU backend
case. Barring a vIOMMU, the IOMMU IOVA space is the guest physical
address space. Thanks,
Alex
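
For concreteness, a rough, self-contained C sketch of the guest-visible
part of steps (ii)-(iii) in the quoted mail: sizing a 32-bit memory BAR
and then programming it through config space. pci_cfg_read32() and
pci_cfg_write32() are hypothetical placeholders for whatever config
access mechanism the guest uses (CF8/CFC, ECAM, ...); for an assigned
device these accesses land in the emulated config space rather than the
physical device, as discussed above. 64-bit and I/O BARs are ignored to
keep it short.

#include <stdint.h>

/*
 * Hypothetical config-space accessors, standing in for whatever
 * mechanism the guest uses (CF8/CFC, ECAM, ...).
 */
uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
void pci_cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off,
                     uint32_t val);

#define PCI_BAR0_OFFSET 0x10    /* config-space offset of BAR0 */

/* Size a 32-bit memory BAR, then program it with a guest-chosen base. */
static uint64_t bar_size_and_program(uint8_t bus, uint8_t dev, uint8_t fn,
                                     int bar, uint32_t guest_base)
{
    uint8_t off = PCI_BAR0_OFFSET + 4 * bar;
    uint32_t mask;

    /* Write all 1s, read back: bits the device keeps clear encode the size. */
    pci_cfg_write32(bus, dev, fn, off, 0xffffffff);
    mask = pci_cfg_read32(bus, dev, fn, off) & 0xfffffff0u;

    /* Program the base address carved out of guest physical address space. */
    pci_cfg_write32(bus, dev, fn, off, guest_base);

    return (uint64_t)(~mask + 1u);  /* 0 if the BAR is unimplemented */
}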
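
A minimal userspace sketch of the flow described in the reply, loosely
modeled on what QEMU's vfio-pci code does with the VFIO and KVM UAPIs:
mmap BAR0 from the VFIO device fd, back the guest-programmed BAR GPA
with that mmap via a KVM memory slot, and, in the legacy type1 backend
without a vIOMMU, map the same mmap at IOVA == GPA so peer devices can
DMA to the BAR. The device/container/VM file descriptors, the BAR GPA,
and the slot number are assumed to be set up elsewhere; error handling
is trimmed.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>
#include <linux/kvm.h>

int map_bar0(int device_fd, int container_fd, int vm_fd, uint64_t bar_gpa)
{
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_BAR0_REGION_INDEX,
    };
    void *bar;

    /* 1. Host virtual mapping of the device BAR via the VFIO device fd. */
    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg))
        return -1;
    if (!(reg.flags & VFIO_REGION_INFO_FLAG_MMAP))
        return -1;              /* BAR not mmap-able, would be trapped */
    bar = mmap(NULL, reg.size, PROT_READ | PROT_WRITE, MAP_SHARED,
               device_fd, reg.offset);
    if (bar == MAP_FAILED)
        return -1;

    /*
     * 2. Back the guest-programmed BAR GPA with that mmap via a KVM
     *    memory slot, so vCPU accesses reach the device directly.
     */
    struct kvm_userspace_memory_region slot = {
        .slot = 1,              /* arbitrary free slot number */
        .guest_phys_addr = bar_gpa,
        .memory_size = reg.size,
        .userspace_addr = (uint64_t)(uintptr_t)bar,
    };
    if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &slot))
        return -1;

    /*
     * 3. Type1 IOMMU: map the same mmap at IOVA == GPA so peer devices
     *    can DMA to this BAR at its guest address (no-vIOMMU case).
     */
    struct vfio_iommu_type1_dma_map dma = {
        .argsz = sizeof(dma),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uint64_t)(uintptr_t)bar,
        .iova = bar_gpa,
        .size = reg.size,
    };
    if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma))
        return -1;

    return 0;
}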