Message-ID: <BN9PR11MB5276B96470F3BF1E7077CC018C5DA@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Wed, 21 Jun 2023 08:16:52 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: Robin Murphy <robin.murphy@....com>,
Alex Williamson <alex.williamson@...hat.com>,
Baolu Lu <baolu.lu@...ux.intel.com>,
"Alexander Duyck" <alexander.duyck@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-pci <linux-pci@...r.kernel.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>
Subject: RE: Question about reserved_regions w/ Intel IOMMU
> From: Jason Gunthorpe <jgg@...dia.com>
> Sent: Friday, June 16, 2023 8:21 PM
>
> On Fri, Jun 16, 2023 at 08:39:46AM +0000, Tian, Kevin wrote:
> > +Alex
> >
> > > From: Jason Gunthorpe <jgg@...dia.com>
> > > Sent: Tuesday, June 13, 2023 11:54 PM
> > >
> > > On Thu, Jun 08, 2023 at 04:28:24PM +0100, Robin Murphy wrote:
> > >
> > > > > The iova_reserve_pci_windows() you've seen is for kernel DMA
> interfaces
> > > > > which is not related to peer-to-peer accesses.
> > > >
> > > > Right, in general the IOMMU driver cannot be held responsible for
> > > whatever
> > > > might happen upstream of the IOMMU input.
> > >
> > > The driver yes, but..
> > >
> > > > The DMA layer carves PCI windows out of its IOVA space
> > > > unconditionally because we know that they *might* be problematic,
> > > > and we don't have any specific constraints on our IOVA layout so
> > > > it's no big deal to just sacrifice some space for simplicity.
> > >
> > > This is a problem for everything using UNMANAGED domains. If the
> iommu
> > > API user picks an IOVA it should be able to expect it to work. If the
> > > interconnect fails to allow it to work then this has to be discovered
> > > otherwise UNMANAGED domains are not usable at all.
> > >
> > > Eg vfio and iommufd are also in trouble on these configurations.
> > >
> >
> > If those PCI windows are problematic e.g. due to ACS they belong to
> > a single iommu group. If a vfio user opens all the devices in that group
> > then it can discover and reserve those windows in its IOVA space.
>
> How? We don't even exclude the single device's BAR if there is no ACS?
I thought the initial vBAR value in vfio is copied from the physical BAR,
so the user could check that value to skip those ranges. But it's informal,
and it looks like today QEMU doesn't compose the GPA layout with any
information from there.
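(As an aside, for devices the user does open, the kernel already exports
the per-group reserved windows via sysfs. A minimal sketch of a parser,
assuming the documented `/sys/kernel/iommu_groups/<N>/reserved_regions`
format of one `0x<start> 0x<end> <type>` entry per line; the function
name is hypothetical:)

```python
def parse_reserved_regions(text):
    """Parse the contents of a sysfs reserved_regions file.

    Each line has the form "0x<start> 0x<end> <type>", e.g.
    "0x00000000fee00000 0x00000000feefffff msi".
    Returns a list of (start, end, type) tuples; malformed
    lines are skipped.
    """
    regions = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue
        start, end, rtype = int(parts[0], 16), int(parts[1], 16), parts[2]
        regions.append((start, end, rtype))
    return regions
```

The gap discussed in this thread is exactly that this file only reflects
regions known for the group's own devices, not windows behind unopened
peers.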
>
> > The problem is that the user may not open all the devices then
> > currently there is no way for it to know the windows on those
> > unopened devices.
> >
> > Curious why nobody complained about this gap before this thread...
>
> Probably because it only matters if you have a real PCIe switch in the
> system, which is pretty rare.
>
Multi-device groups might not be rare, given vfio has spent so much
effort managing them.
More likely the virtual BIOS reserves a big enough hole in [3GB, 4GB]
which happens to cover the physical BARs (if not 64-bit) in the group,
avoiding conflicts, e.g.:
c0000000-febfffff : PCI Bus 0000:00
  fd000000-fdffffff : 0000:00:01.0
    fd000000-fdffffff : bochs-drm
  fe000000-fe01ffff : 0000:00:02.0
  fe020000-fe02ffff : 0000:00:02.0
  fe030000-fe033fff : 0000:00:03.0
    fe030000-fe033fff : virtio-pci-modern
  feb80000-febbffff : 0000:00:03.0
  febd0000-febd0fff : 0000:00:01.0
    febd0000-febd0fff : bochs-drm
  febd1000-febd1fff : 0000:00:03.0
  febd2000-febd2fff : 0000:00:1f.2
    febd2000-febd2fff : ahci
fec00000-fec003ff : IOAPIC 0
fed00000-fed003ff : HPET 0
  fed00000-fed003ff : PNP0103:00
fed1c000-fed1ffff : Reserved
  fed1f410-fed1f414 : iTCO_wdt.0.auto
fed90000-fed90fff : dmar0
fee00000-fee00fff : Local APIC
feffc000-feffffff : Reserved
fffc0000-ffffffff : Reserved
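(Once such windows are known, carving them out of a guest or userspace
IOVA layout is mechanical; a hypothetical helper, just to illustrate the
shape of the operation:)

```python
def carve_iova(space_start, space_end, reserved):
    """Subtract reserved windows from the inclusive range
    [space_start, space_end].

    `reserved` is a list of (start, end) tuples (inclusive).
    Returns the remaining usable IOVA ranges, sorted and
    non-overlapping.
    """
    usable = []
    cur = space_start
    for r_start, r_end in sorted(reserved):
        if r_end < cur or r_start > space_end:
            continue  # window entirely outside the remaining space
        if r_start > cur:
            usable.append((cur, r_start - 1))
        cur = max(cur, r_end + 1)
    if cur <= space_end:
        usable.append((cur, space_end))
    return usable
```

E.g. carving the MSI window 0xfee00000-0xfeefffff out of a 4GB space
leaves two usable ranges on either side of it.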