Message-ID: <ZkOJgvMNFaZZ06OO@nvidia.com>
Date: Tue, 14 May 2024 12:55:46 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: will@...nel.org, robin.murphy@....com, kevin.tian@...el.com,
	suravee.suthikulpanit@....com, joro@...tes.org,
	linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
	linux-arm-kernel@...ts.infradead.org, linux-tegra@...r.kernel.org,
	yi.l.liu@...el.com, eric.auger@...hat.com, vasant.hegde@....com,
	jon.grimm@....com, santosh.shukla@....com, Dhaval.Giani@....com,
	shameerali.kolothum.thodi@...wei.com
Subject: Re: [PATCH RFCv1 04/14] iommufd: Add struct iommufd_viommu and
 iommufd_viommu_ops

On Sun, May 12, 2024 at 08:34:02PM -0700, Nicolin Chen wrote:
> On Sun, May 12, 2024 at 11:03:53AM -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 12, 2024 at 08:47:01PM -0700, Nicolin Chen wrote:
> > > Add a new iommufd_viommu core structure to represent a vIOMMU instance in
> > > user space, typically backed by a HW-accelerated feature of an IOMMU,
> > > e.g. NVIDIA CMDQ-Virtualization (an ARM SMMUv3 extension) or AMD Hardware
> > > Accelerated Virtualized IOMMU (vIOMMU).
> > 
> > I expect this will also be the only way to pass in an associated KVM;
> > userspace would supply the KVM when creating the viommu.
> > 
> > The tricky bit of this flow is how to manage the S2. It is necessary
> > that the S2 be linked to the viommu:
> > 
> >  1) ARM BTM requires the VMID to be shared with KVM
> >  2) AMD and others need the S2 translation because some of the HW
> >     acceleration is done inside the guest address space
> > 
> > I haven't looked closely at AMD, but presumably the VIOMMU create will
> > have to install the S2 into a DID or something?
> > 
> > So we need the S2 to exist before the VIOMMU is created, but the
> > drivers are going to need some more fixing before that will fully
> > work.
> > 
> > Does creating the nesting domain need the viommu as well (in place of
> > the S2 hwpt)? That feels sort of natural.
> 
> Yes, I had a similar thought initially: each viommu is backed by
> nested IOMMU HW, and a special HW accelerator like VCMDQ could
> be treated as an extension on top of that. It might not be as
> straightforward as the current design, which has vintf <-> viommu
> and vcmdq <-> vqueue, though...

vqueue should be considered a sub-object of the viommu and should hold a
refcount on the viommu object for its lifetime.
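
Roughly the shape I have in mind for that lifetime rule, as a
standalone sketch (my_viommu/my_vqueue and the plain C11 atomics are
stand-ins here, not the actual iommufd structures or the kernel
refcount helpers):

/* Hypothetical model of the vqueue -> viommu lifetime rule; the names
 * and the plain C11 atomics are stand-ins, not the real iommufd code. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct my_viommu {
	atomic_int users;		/* userspace handle + one per vqueue */
};

struct my_vqueue {
	struct my_viommu *viommu;	/* back pointer pins the parent */
};

static struct my_viommu *viommu_get(struct my_viommu *v)
{
	atomic_fetch_add(&v->users, 1);
	return v;
}

static void viommu_put(struct my_viommu *v)
{
	if (atomic_fetch_sub(&v->users, 1) == 1) {
		printf("viommu freed\n");
		free(v);
	}
}

static struct my_vqueue *vqueue_create(struct my_viommu *v)
{
	struct my_vqueue *q = calloc(1, sizeof(*q));

	if (!q)
		return NULL;
	/* The sub-object holds a reference for its whole lifetime, so the
	 * viommu cannot go away while any vqueue still exists. */
	q->viommu = viommu_get(v);
	return q;
}

static void vqueue_destroy(struct my_vqueue *q)
{
	viommu_put(q->viommu);
	free(q);
}

int main(void)
{
	struct my_viommu *v = calloc(1, sizeof(*v));
	struct my_vqueue *q;

	if (!v)
		return 1;
	atomic_init(&v->users, 1);	/* initial reference */
	q = vqueue_create(v);
	if (!q) {
		viommu_put(v);
		return 1;
	}
	viommu_put(v);			/* drop initial ref: viommu stays alive */
	vqueue_destroy(q);		/* last ref dropped -> "viommu freed" */
	return 0;
}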
 
> In that case, we can support viommu_cache_invalidate, which
> is quite natural for SMMUv3. Yet, I recall Kevin saying that VT-d
> doesn't want or need that.

Right, Intel currently doesn't need it, but I feel like everyone will
need this eventually as the fast invalidation path is quite important.
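
To make the viommu-scoped invalidation idea concrete, here is a rough
sketch of the dispatch shape; every name in it (my_viommu, the ops
struct, the command layout) is made up for illustration, not the actual
iommufd or driver API. The point is just that the commands carry
guest-visible IDs and are interpreted against the viommu instance
rather than against an individual nested hwpt:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical user-format invalidation record: the device ID in it is
 * the guest's virtual one, so only the owning viommu can resolve it. */
struct my_viommu_inv {
	uint32_t vdev_id;
	uint64_t iova;
	uint64_t size;
};

struct my_viommu;

struct my_viommu_ops {
	/* Driver op: flush caches in the context of this viommu instance. */
	int (*cache_invalidate)(struct my_viommu *viommu,
				const struct my_viommu_inv *cmds, size_t n);
};

struct my_viommu {
	const struct my_viommu_ops *ops;
};

/* Core-side helper: the batch is issued against the viommu object itself,
 * not against one nested hwpt, so one call can cover the whole vIOMMU. */
static int my_viommu_cache_invalidate(struct my_viommu *viommu,
				      const struct my_viommu_inv *cmds,
				      size_t n)
{
	if (!viommu->ops->cache_invalidate)
		return -1;
	return viommu->ops->cache_invalidate(viommu, cmds, n);
}

/* Toy driver implementation, only so the sketch runs. */
static int toy_invalidate(struct my_viommu *viommu,
			  const struct my_viommu_inv *cmds, size_t n)
{
	(void)viommu;
	for (size_t i = 0; i < n; i++)
		printf("flush vdev %u: iova 0x%llx size 0x%llx\n",
		       cmds[i].vdev_id,
		       (unsigned long long)cmds[i].iova,
		       (unsigned long long)cmds[i].size);
	return 0;
}

int main(void)
{
	static const struct my_viommu_ops ops = {
		.cache_invalidate = toy_invalidate,
	};
	struct my_viommu viommu = { .ops = &ops };
	struct my_viommu_inv cmd = { .vdev_id = 3, .iova = 0x1000, .size = 0x1000 };

	return my_viommu_cache_invalidate(&viommu, &cmd, 1);
}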

Jason
