Message-ID: <20251028103945.0000716e@linux.microsoft.com>
Date: Tue, 28 Oct 2025 10:39:45 -0700
From: Jacob Pan <jacob.pan@...ux.microsoft.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Vipin Sharma <vipinsh@...gle.com>, bhelgaas@...gle.com,
alex.williamson@...hat.com, pasha.tatashin@...een.com, dmatlack@...gle.com,
graf@...zon.com, pratyush@...nel.org, gregkh@...uxfoundation.org,
chrisl@...nel.org, rppt@...nel.org, skhawaja@...gle.com, parav@...dia.com,
saeedm@...dia.com, kevin.tian@...el.com, jrhilke@...gle.com,
david@...hat.com, jgowans@...zon.com, dwmw2@...radead.org,
epetron@...zon.de, junaids@...gle.com, linux-kernel@...r.kernel.org,
linux-pci@...r.kernel.org, kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [RFC PATCH 06/21] vfio/pci: Accept live update preservation
request for VFIO cdev
On Tue, 28 Oct 2025 10:28:55 -0300
Jason Gunthorpe <jgg@...pe.ca> wrote:
> On Mon, Oct 27, 2025 at 01:44:30PM -0700, Jacob Pan wrote:
> > I have a separate question regarding noiommu devices. I’m currently
> > working on adding noiommu mode support for VFIO cdev under iommufd.
> >
>
> Oh how is that going? I was just thinking about that again..
>
I initially tried to create a special VFIO no-iommu iommu_domain
without an iommu driver, but I found it difficult without an iommu_group
and the rest of that machinery. I also had a special vfio_device_ops
(vfio_pci_noiommu_ops) with a special vfio_iommufd_noiommu_bind to
create an iommufd_access object, as in Yi's original patch.

My current approach is a dedicated noiommu driver that handles the
special iommu_domain. It seems much cleaner, though with some extra
code overhead; a rough sketch of what such a driver could look like
follows the sysfs listing below. I have a working prototype that has:
# tree /dev/vfio/
/dev/vfio/
|-- 7
|-- devices
| `-- noiommu-vfio0
`-- vfio
And the typical:
/sys/class/iommu/noiommu/
|-- devices
| |-- 0000:00:00.0 -> ../../../../pci0000:00/0000:00:00.0
| |-- 0000:00:01.0 -> ../../../../pci0000:00/0000:00:01.0
| |-- 0000:00:02.0 -> ../../../../pci0000:00/0000:00:02.0
| |-- 0000:00:03.0 -> ../../../../pci0000:00/0000:00:03.0
| |-- 0000:00:04.0 -> ../../../../pci0000:00/0000:00:04.0
| |-- 0000:00:05.0 -> ../../../../pci0000:00/0000:00:05.0
| |-- 0000:01:00.0 -> ../../../../pci0000:00/0000:00:04.0/0000:0
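
For a direction check on the driver side, here is a minimal sketch of
the kind of noiommu driver I mean. This is not the actual patch; it
only shows the idea of a domain that accepts attach/map but never
programs any hardware, so iommufd's pinning path runs unmodified. The
op signatures follow the current iommu_domain_ops layout as I
understand it and may not match every kernel version; the noiommu_*
names are placeholders.

static int noiommu_attach_dev(struct iommu_domain *domain,
			      struct device *dev)
{
	/* nothing to program; accept any device */
	return 0;
}

static int noiommu_map_pages(struct iommu_domain *domain,
			     unsigned long iova, phys_addr_t paddr,
			     size_t pgsize, size_t pgcount, int prot,
			     gfp_t gfp, size_t *mapped)
{
	/* no page table to fill; pretend the whole range mapped */
	*mapped = pgsize * pgcount;
	return 0;
}

static size_t noiommu_unmap_pages(struct iommu_domain *domain,
				  unsigned long iova, size_t pgsize,
				  size_t pgcount,
				  struct iommu_iotlb_gather *gather)
{
	/* nothing was really mapped; report the whole range unmapped */
	return pgsize * pgcount;
}

static const struct iommu_domain_ops noiommu_domain_ops = {
	.attach_dev	= noiommu_attach_dev,
	.map_pages	= noiommu_map_pages,
	.unmap_pages	= noiommu_unmap_pages,
};
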
The following user test passes (a rough userspace sketch of these
steps follows the list):
1. iommufd = open("/dev/iommu", O_RDWR);
2. devfd = open a noiommu cdev
3. ioas_id = ioas_alloc(iommufd)
4. iommufd_bind(iommufd, devfd)
5. successfully do an ioas map, e.g.
   ioctl(iommufd, IOMMU_IOAS_MAP, &map)
This calls pfn_reader_user_pin(), but the noiommu driver does nothing
for the mapping.
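
Roughly, in terms of the existing cdev/iommufd uAPI the test does the
following (sketch only, error handling omitted, device path taken from
the tree above):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/iommufd.h>
#include <linux/vfio.h>

static int noiommu_map_test(void)
{
	int iommufd = open("/dev/iommu", O_RDWR);			/* 1 */
	int devfd = open("/dev/vfio/devices/noiommu-vfio0", O_RDWR);	/* 2 */

	struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
	ioctl(iommufd, IOMMU_IOAS_ALLOC, &alloc);			/* 3 */

	struct vfio_device_bind_iommufd bind = {
		.argsz = sizeof(bind),
		.iommufd = iommufd,
	};
	ioctl(devfd, VFIO_DEVICE_BIND_IOMMUFD, &bind);			/* 4 */

	/* attach to the IOAS; not spelled out in the step list above,
	 * but part of the usual cdev flow, and in the prototype the
	 * noiommu domain simply accepts it */
	struct vfio_device_attach_iommufd_pt attach = {
		.argsz = sizeof(attach),
		.pt_id = alloc.out_ioas_id,
	};
	ioctl(devfd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach);

	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct iommu_ioas_map map = {
		.size = sizeof(map),
		.flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
		.ioas_id = alloc.out_ioas_id,
		.user_va = (uintptr_t)buf,
		.length = 4096,
	};
	/* without IOMMU_IOAS_MAP_FIXED_IOVA, map.iova is an output */
	return ioctl(iommufd, IOMMU_IOAS_MAP, &map);			/* 5 */
}
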
I am still debugging some cases and would like a direction check
before going too far.
> After writing the generic pt self test it occurred to me we now have
> enough infrastructure for iommufd to internally create its own
> iommu_domain with an AMDv1 page table for the noiommu devices. It would
> then be so easy to feed that through the existing machinery and have
> all the pinning/etc work.
>
Could you elaborate a little more? noiommu devices don't have page
tables. Are you saying iommufd can create its own iommu_domain w/o a
vendor iommu driver? Let me catch up with your v7 :)
> Then only an ioctl to read back the physical addresses from this
> special domain would be needed
>
Yes, that was part of your original suggestion to avoid /proc pagemap.
I have not added that yet. Do you think this warrants a new ioctl
(a hypothetical sketch follows the struct below), or should it just be
returned in
struct iommu_ioas_map map = {
.size = sizeof(map),
.flags = IOMMU_IOAS_MAP_READABLE,
.ioas_id = ioas_id,
.iova = iova,
.user_va = uvaddr,
.length = size,
};
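
If it does end up as a separate ioctl, I imagine something along these
lines -- purely illustrative, none of these names or numbers exist in
the uAPI today:

/*
 * Hypothetical uAPI sketch only, nothing like this exists upstream:
 * translate an IOVA in an IOAS back to the physical address of the
 * pinned page, so noiommu userspace can program the device without
 * going through /proc/self/pagemap.
 */
struct iommu_ioas_iova_to_phys {
	__u32 size;
	__u32 ioas_id;
	__aligned_u64 iova;	/* in: IOVA previously mapped into the IOAS */
	__aligned_u64 out_phys;	/* out: physical address backing that IOVA */
};
#define IOMMU_IOAS_IOVA_TO_PHYS _IO(IOMMUFD_TYPE, 0x99)	/* made-up cmd nr */
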
> It actually sort of feels pretty easy..
>
> Jason