Message-ID: <ZkOFkfHhG2h2fv/c@nvidia.com>
Date: Tue, 14 May 2024 12:38:57 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: will@...nel.org, robin.murphy@....com, kevin.tian@...el.com,
	suravee.suthikulpanit@....com, joro@...tes.org,
	linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
	linux-arm-kernel@...ts.infradead.org, linux-tegra@...r.kernel.org,
	yi.l.liu@...el.com, eric.auger@...hat.com, vasant.hegde@....com,
	jon.grimm@....com, santosh.shukla@....com, Dhaval.Giani@....com,
	shameerali.kolothum.thodi@...wei.com
Subject: Re: [PATCH RFCv1 05/14] iommufd: Add IOMMUFD_OBJ_VIOMMU and
 IOMMUFD_CMD_VIOMMU_ALLOC

> > > +
> > > +/**
> > > + * enum iommu_viommu_type - VIOMMU Type
> > > + * @IOMMU_VIOMMU_TYPE_TEGRA241_CMDQV: NVIDIA Tegra241 CMDQV Extension for SMMUv3
> > > + */
> > > +enum iommu_viommu_type {
> > > +	IOMMU_VIOMMU_TYPE_TEGRA241_CMDQV,
> > > +};
> > 
> > At least the 241 line should be in a following patch
> 
> It's for the "enum iommu_viommu_type" mentioned in the following
> structure. Yi told me that you don't like an empty enum, and he
> did something like this in the HWPT_INVALIDATE series:
> https://lore.kernel.org/linux-iommu/20240111041015.47920-3-yi.l.liu@intel.com/

I suspect 0 should be reserved as a non-set value for some
basic sanity in all these driver type enums.
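
Roughly like this; the name for 0 below is just a placeholder, not a
naming suggestion:

enum iommu_viommu_type {
	IOMMU_VIOMMU_TYPE_NONE = 0,	/* reserved: a zeroed/unset field is never a valid type */
	IOMMU_VIOMMU_TYPE_TEGRA241_CMDQV = 1,
};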

Jason

> > So, to make this all work perfectly we need approx the following
> >  - S2 sharing across instances in ARM - meaning the VMID is allocated
> >    at attach not domain alloc
> >  - S2 hwpt is refcounted by the VIOMMU in the iommufd layer
> >  - VIOMMU is refcounted by every nesting child in the iommufd layer
> >  - The nesting child holds a pointer to both the S2 and the VIOMMU
> >    (viommu optional)
> >  - When the nesting child attaches to a device the STE will source the
> >    VMID from the VIOMMU if present otherwise from the S2
> >  - "RID" attach (ie naked S2) will have to be done with a Nesting
> >    Child using a vSTE that indicates Identity. Then the attach logic
> >    will have enough information to get the VMID from the VIOMMU
> 
> What is this RID attach (naked S2) case? S1DSS_BYPASS + SVA?

No, when the guest installs a vSTE that simply says bypass with no CD
table pointer. That should result in a pSTE that is the S2 with no CD
pointer.
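
To sketch what I mean (every type, field and helper name here is made
up for illustration, it is not the real STE encoding or driver code):

#include <stdbool.h>
#include <stdint.h>

struct s2_hwpt { uint16_t vmid; uint64_t vttbr; };
struct viommu  { uint16_t vmid; };
struct vste    { bool bypass; uint64_t cd_table_ptr; };
struct pste    { int cfg; uint64_t cd_table_ptr; uint16_t vmid; uint64_t s2_ttb; };

enum { CFG_S2_ONLY, CFG_NESTED };

/* Sketch of the attach logic: which pSTE a given vSTE should produce */
static void compose_pste(struct pste *pste, const struct vste *vste,
			 const struct viommu *viommu, const struct s2_hwpt *s2)
{
	if (vste->bypass) {
		/* "RID"/naked-S2 case: S2 translation only, no CD table */
		pste->cfg = CFG_S2_ONLY;
		pste->cd_table_ptr = 0;
	} else {
		/* Nested case: guest-owned CD table under the S2 */
		pste->cfg = CFG_NESTED;
		pste->cd_table_ptr = vste->cd_table_ptr;
	}
	/* VMID sourced from the VIOMMU if present, otherwise from the S2 */
	pste->vmid = viommu ? viommu->vmid : s2->vmid;
	pste->s2_ttb = s2->vttbr;
}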

I was originally thinking that the VMM would simply attach the S2
HWPT directly in this case, but given the above issue with the VMID
lifetime it makes more sense to 'attach' the viommu, which holds the
correct VMID.

The issue with directly attaching the S2 HWPT is the VMID lifetime:
it would have to borrow the VMID from the viommu, but then the
lifetime becomes more complex as it has to live beyond VIOMMU
destruction. Not unsolvable, but it seems easier to just avoid it
entirely.
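
A toy sketch of the lifetime I have in mind, with made-up names, just
to show where the VMID free would sit:

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

static void toy_free_vmid(uint16_t vmid)
{
	(void)vmid;		/* return the VMID to the instance's allocator */
}

struct toy_viommu {
	int refs;		/* held by nesting children and attached devices */
	uint16_t vmid;		/* owned by the viommu for its whole lifetime */
};

static struct toy_viommu *toy_viommu_get(struct toy_viommu *v)
{
	v->refs++;
	return v;
}

static void toy_viommu_put(struct toy_viommu *v)
{
	assert(v->refs > 0);
	if (--v->refs == 0) {
		toy_free_vmid(v->vmid);	/* VMID freed only once the last user is gone */
		free(v);
	}
}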

> >  - In full VIOMMU mode the S2 will never get a VMID of its own, it
> >    will always use the VIOMMU. Life cycle is simple, the VMID is freed
> >    when the VIOMMU is freed. That can't happen until all Nesting
> >    Children are freed. That can't happen until all Nesting Children
> >    are detached from devices. Detaching removes the HW touch of the VMID.
> 
> So, each VM will have one S2 HWPT/domain/iopt, but each VM can
> have multiple VIOMMU instances sharing that single S2 HWPT, and
> each VIOMMU instance (in the SMMU driver at least) holds a vmid.

Yes, right. We really want to share the S2 across instances in the
end, and I have made the VMID per-instance along with the per-instance
ASID. So the above sounds like it could work.
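
As a rough picture of the ownership, with purely illustrative types
(not the real iommufd or arm-smmu-v3 structs):

#include <stdint.h>

/* One S2 HWPT/domain/iopt per VM, refcounted by the viommus that share it */
struct toy_shared_s2 {
	int users;			/* one reference per viommu using this S2 */
	/* stage-2 page table, iopt, etc. */
};

/* One viommu per SMMU instance in the VM, each with its own VMID */
struct toy_instance_viommu {
	struct toy_shared_s2 *s2;	/* shared, refcounted S2 */
	uint16_t vmid;			/* per-instance, owned by this viommu */
	/* per-instance ASID state would live here as well */
};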

Jason
