Date: Sun, 2 Jun 2024 20:25:34 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "Tian, Kevin" <kevin.tian@...el.com>, "will@...nel.org" <will@...nel.org>,
	"robin.murphy@....com" <robin.murphy@....com>,
	"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
	"joro@...tes.org" <joro@...tes.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "iommu@...ts.linux.dev"
	<iommu@...ts.linux.dev>, "linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>, "linux-tegra@...r.kernel.org"
	<linux-tegra@...r.kernel.org>, "Liu, Yi L" <yi.l.liu@...el.com>,
	"eric.auger@...hat.com" <eric.auger@...hat.com>, "vasant.hegde@....com"
	<vasant.hegde@....com>, "jon.grimm@....com" <jon.grimm@....com>,
	"santosh.shukla@....com" <santosh.shukla@....com>, "Dhaval.Giani@....com"
	<Dhaval.Giani@....com>, "shameerali.kolothum.thodi@...wei.com"
	<shameerali.kolothum.thodi@...wei.com>
Subject: Re: [PATCH RFCv1 08/14] iommufd: Add IOMMU_VIOMMU_SET_DEV_ID ioctl

On Sat, Jun 01, 2024 at 06:45:01PM -0300, Jason Gunthorpe wrote:
> On Wed, May 29, 2024 at 05:58:39PM -0700, Nicolin Chen wrote:
> > On Thu, May 30, 2024 at 12:28:43AM +0000, Tian, Kevin wrote:
> > > > From: Nicolin Chen <nicolinc@...dia.com>
> > > > Sent: Wednesday, May 29, 2024 11:21 AM
> > > > On Wed, May 29, 2024 at 02:58:11AM +0000, Tian, Kevin wrote:
> > > > > My question is why that option is chosen instead of going with a 1:1
> > > > > mapping between vSMMU and viommu, i.e. letting the kernel figure
> > > > > out which pSMMU an invalidation cmd should be sent to, which is
> > > > > how VT-d is virtualized.
> > > > >
> > > > > I want to know whether doing so is simply to be compatible with
> > > > > what VCMDQ requires, or due to another untold reason.
> > > >
> > > > Because we use viommu as a VMID holder for SMMU. So a pSMMU must
> > > > have its own viommu to store its VMID for a shared s2_hwpt:
> > > >         |-- viommu0 (VMIDx) --|-- pSMMU0 --|
> > > >  vSMMU--|-- viommu1 (VMIDy) --|-- pSMMU1 --|--s2_hwpt
> > > >         |-- viommu2 (VMIDz) --|-- pSMMU2 --|
> > > >
> > > 
> > > there are other options, e.g. you can have one viommu holding multiple
> > > VMIDs, each associated with a pSMMU.
> > 
> > Well, possibly. But everything previously held in a viommu would have
> > to become a list (one entry per VMID) at the kernel level: not only
> > a VMID list, but also 2D virtual ID lists, something like:
> > 
> > struct xarray vdev_ids[num_of_vmid]; // per-IOMMU vID to pID lookup
> 
> I feel it makes most sense that ARM (and maybe everyone) just have a
> viommu per piommu.
> 
> The main argument against is we haven't made it efficient for the VMM
> to support multiple piommus. It has to do a system call per piommu
> each time it processes the cmdq.
> 
> But, on the other hand, if you care about invalidation efficiency it
> is kind of silly not to expose the piommus to the guest so that the
> invalidation scope can be appropriately narrowed. Replicating all ASID
> invalidations to all piommus doesn't make a lot of sense if the guest
> can know that only one piommu actually needs invalidation.

Yea, that'd be pretty slow, though a broadcast would still be
unavoidable when an invalidation only carries an address range
w/o an ASID, e.g. CMD_TLBI_NH_VAA.

In fact, there always has to be a dispatcher (v.s. a broadcast):
 - in the one-viommu-per-pIOMMU case (#1), it's in the VMM
 - in the one-viommu-per-vIOMMU case (#2), it's in the kernel

One of the two has to burn some CPU cycles for-eaching the hwpt
list to identify which IOMMU to forward to. Design #1 simply
makes the kernel side easier.
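
To make the design #1 dispatcher concrete, here is a minimal sketch of
the VMM-side loop, assuming hypothetical types and helper names (not
actual QEMU code); the real path would end up issuing
IOMMU_VIOMMU_INVALIDATE against the chosen viommu(s):

#include <stdbool.h>
#include <stddef.h>

struct viommu {                 /* one iommufd viommu object per pSMMU */
	int viommu_id;          /* hypothetical object handle */
};

struct vsmmu {                  /* the single vSMMU exposed to the guest */
	struct viommu *viommus;
	size_t nr_viommus;
};

/* hypothetical placeholders for the actual scope check and ioctl call */
extern bool cmd_scoped_to(const void *cmd, const struct viommu *v);
extern void viommu_invalidate(struct viommu *v, const void *cmd);

/*
 * Forward one guest CMDQ entry: a scoped command goes only to the viommu
 * whose pSMMU actually needs it, while an unscoped one (e.g.
 * CMD_TLBI_NH_VAA) still has to be broadcast to every viommu.
 */
static void vsmmu_dispatch_inv(struct vsmmu *s, const void *cmd, bool scoped)
{
	size_t i;

	for (i = 0; i < s->nr_viommus; i++) {
		struct viommu *v = &s->viommus[i];

		if (!scoped || cmd_scoped_to(cmd, v))
			viommu_invalidate(v, cmd);
	}
}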

Design #2, on the other hand, would not only require the lists
and new objects that we just discussed, but also a pair of
VIOMMU_SET/UNSET_HWPT_ID ioctls. Then again, it might also make
sense, given that we chose IOMMU_VIOMMU_INVALIDATE over
IOMMU_DEV_INVALIDATE by adding VIOMMU_SET/UNSET_VDEV_ID?
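
And to be concrete about those lists/objects, a rough sketch of what a
single core-level viommu might have to grow into under design #2; the
names here are made up for illustration, not taken from the series:

#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/xarray.h>

struct viommu_piommu_slot {
	struct iommu_device *piommu;	/* one backing physical SMMU */
	u16 vmid;			/* VMID allocated on that pSMMU */
	struct xarray vdev_ids;		/* per-pIOMMU virtual ID -> physical ID */
	struct list_head node;
};

struct viommu_core {
	struct list_head piommu_slots;	/* one slot per attached pSMMU */
	/*
	 * A driver-specific extension (e.g. VINTF in this series) would
	 * need yet another per-pIOMMU object hanging off each slot.
	 */
};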

> > And in this case it would be difficult for a driver to get a
> > complete view of a viommu object, since it's a core object
> > shared by all kernel-level IOMMU instances. If a driver wants
> > to extend a viommu object for some additional feature, e.g.
> > VINTF in this series, it would likely have to create another
> > per-driver object and yet another list of such objects in
> > struct viommu.
> 
> Right, we need some kind of per-piommu object because we have
> per-piommu data.
>
> > Oh. With regular nested SMMU, there is only one virtual SMMU in
> > the guest VM. No need to copy the physical topology. The VMM just
> > needs to allocate three viommus and add them to a list of its own.
> 
> I understand the appeal of doing this has been to minimize qemu
> changes in its ACPI parts. If we tackle that instead, maybe we
> should just not implement viommu-to-multiple-piommu. It is somewhat
> complicated.

Would you please clarify that suggestion "not implement viommu
to multiple piommu"?

For regular nesting (SMMU), we are still doing one vSMMU in the
VMM, though the VCMDQ case would be an exception...

Thanks
Nicolin
