Message-ID: <20240522164623.GA20229@nvidia.com>
Date: Wed, 22 May 2024 13:46:23 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: will@...nel.org, robin.murphy@....com, kevin.tian@...el.com,
suravee.suthikulpanit@....com, joro@...tes.org,
linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org, linux-tegra@...r.kernel.org,
yi.l.liu@...el.com, eric.auger@...hat.com, vasant.hegde@....com,
jon.grimm@....com, santosh.shukla@....com, Dhaval.Giani@....com,
shameerali.kolothum.thodi@...wei.com
Subject: Re: [PATCH RFCv1 05/14] iommufd: Add IOMMUFD_OBJ_VIOMMU and
IOMMUFD_CMD_VIOMMU_ALLOC
On Tue, May 21, 2024 at 05:13:50PM -0700, Nicolin Chen wrote:
> Yeah. The VMM is always allowed to create a viommu to wrap an
> S2 HWPT. Then, I assume iommufd in this case should allocate a
> viommu object itself if !domain_ops->viommu_alloc.
Yeah
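
Ie roughly something like this in the core (a sketch only; the struct
members and the helper name below are made up for illustration, not
the actual code in this series):

	/*
	 * Sketch: called from the VIOMMU_ALLOC path only when the domain
	 * has no domain_ops->viommu_alloc. Field names are guesses.
	 */
	static struct iommufd_viommu *
	iommufd_viommu_alloc_default(struct iommufd_ctx *ictx,
				     struct iommufd_hwpt_paging *s2_hwpt)
	{
		struct iommufd_viommu *viommu;

		/* Core-managed object that just wraps the S2 HWPT */
		viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
		if (!viommu)
			return ERR_PTR(-ENOMEM);
		viommu->ictx = ictx;
		viommu->hwpt = s2_hwpt;
		return viommu;
	}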
> On one hand, it may not be straightforward for a QEMU viommu
> driver to hold a shared S2 hwpt, as the driver is typically
> per-instance, though I think it can keep the viommu as its own.
> So passing the S2 hwpt back to the QEMU core and tying it to
> the iommufd handle (ictx) makes sense.
Yes, QEMU will need some per-driver-type, but not per-instance, storage
to make this work. I.e. the ARM per-driver-type shared storage would
hold the ARM-specific list of S2 hwpts.
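
Hand-waving in C, the QEMU side could be shaped like below (all names
are invented for the example, not existing QEMU code):

	#include "qemu/queue.h"

	/* One S2 HWPT that has already been created through iommufd */
	typedef struct ArmSmmuS2Hwpt {
	    uint32_t hwpt_id;                  /* iommufd HWPT object id */
	    QLIST_ENTRY(ArmSmmuS2Hwpt) next;
	} ArmSmmuS2Hwpt;

	/* Per-driver-type storage shared by every vSMMU instance in the
	 * VM, holding the ARM-specific list of S2 hwpts */
	typedef struct ArmSmmuViommuShared {
	    QLIST_HEAD(, ArmSmmuS2Hwpt) s2_hwpts;
	} ArmSmmuViommuShared;

	static ArmSmmuViommuShared arm_smmu_shared = {
	    .s2_hwpts = QLIST_HEAD_INITIALIZER(arm_smmu_shared.s2_hwpts),
	};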
> On the other hand, some future HW could potentially support
> two or more kinds of IO page tables, so a VM may have two or
> more S2 hwpts? Then the core would hold a list of S2 hwpts and
> the viommu driver would need to try to allocate a viommu
> against each entry in that list.
Yes, that is supported by the API. Userspace should try to create a
viommu with each of the available S2 hwpts and build a new one if none
of them works, just like hwpt attachment to a device.
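
Ie the userspace flow is roughly (pseudo-C sketch; the two wrappers
below are invented stand-ins for the IOMMU_HWPT_ALLOC and
IOMMUFD_CMD_VIOMMU_ALLOC ioctls):

	static uint32_t viommu_get(int iommufd, uint32_t dev_id,
				   uint32_t ioas_id, uint32_t *s2_hwpt_ids,
				   size_t nr_s2_hwpts)
	{
		uint32_t viommu_id = 0;
		size_t i;

		/* Try every S2 HWPT the VM already has, like hwpt attach */
		for (i = 0; i < nr_s2_hwpts && !viommu_id; i++)
			viommu_alloc_wrapper(iommufd, s2_hwpt_ids[i],
					     &viommu_id);

		if (!viommu_id) {
			/* None was compatible: build a new S2 HWPT, retry,
			 * and remember it in the shared list */
			uint32_t hwpt = hwpt_alloc_wrapper(iommufd, dev_id,
							   ioas_id);

			viommu_alloc_wrapper(iommufd, hwpt, &viommu_id);
		}
		return viommu_id;
	}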
Jason