Message-ID: <20210902144524.GU1721383@nvidia.com>
Date: Thu, 2 Sep 2021 11:45:24 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: "Tian, Kevin" <kevin.tian@...el.com>
Cc: Alex Williamson <alex.williamson@...hat.com>,
Nicolin Chen <nicolinc@...dia.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"thierry.reding@...il.com" <thierry.reding@...il.com>,
"will@...nel.org" <will@...nel.org>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"corbet@....net" <corbet@....net>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"cohuck@...hat.com" <cohuck@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"robin.murphy@....com" <robin.murphy@....com>
Subject: Re: [RFC][PATCH v2 00/13] iommu/arm-smmu-v3: Add NVIDIA
implementation
On Wed, Sep 01, 2021 at 06:55:55AM +0000, Tian, Kevin wrote:
> > From: Alex Williamson
> > Sent: Wednesday, September 1, 2021 12:16 AM
> >
> > On Mon, 30 Aug 2021 19:59:10 -0700
> > Nicolin Chen <nicolinc@...dia.com> wrote:
> >
> > > The SMMUv3 devices implemented in the Grace SoC support NVIDIA's
> > > custom CMDQ-Virtualization (CMDQV) hardware. Like the new ECMDQ
> > > feature first introduced in the ARM SMMUv3.3 specification, CMDQV adds
> > > multiple VCMDQ interfaces to supplement the single architected
> > > SMMU_CMDQ in an effort to reduce contention.
> > >
> > > This series of patches adds CMDQV support with its preparatory changes:
> > >
> > > * PATCH-1 to PATCH-8 are related to the shared VMID feature: they are
> > >   used first to improve TLB utilization, and second to bind a shared
> > >   VMID with a VCMDQ interface to satisfy a hardware configuration
> > >   requirement.
> >
> > The vfio changes would need to be implemented in alignment with the
> > /dev/iommu proposals[1]. AIUI, the VMID is essentially binding
> > multiple containers together for TLB invalidation, which I expect is
> > largely already taken care of in the proposal below, in that a single
> > iommu-fd can support multiple I/O address spaces. Since a hypervisor
> > is largely expected to use a single iommu-fd, this explicit connection
> > by userspace across containers wouldn't be necessary.
>
> Agree. VMID is equivalent to a DID (domain ID) in other vendors'
> IOMMUs. With /dev/iommu, multiple I/O address spaces can share the
> same VMID via nesting, so there is no need to expose the VMID to
> userspace to build the connection.

Indeed, this looks like a flavour of the accelerated invalidation
stuff we've talked about already.

I would probably see it exposed as some HW-specific IOCTL on the iommu
fd that gives access to the accelerated invalidation for IOASIDs in
the FD.
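
To make that a bit more concrete, here is a minimal sketch of what such
an ioctl could look like. None of this exists today; the struct name,
fields, and ioctl number are purely illustrative placeholders for
whatever uAPI the /dev/iommu work ends up with:

/* Hypothetical uAPI sketch only -- not an existing interface. */
#include <linux/types.h>
#include <linux/ioctl.h>

/*
 * Ask the iommu fd to back invalidations for one IOASID with a
 * hardware-accelerated queue (e.g. a VCMDQ), returning a queue id the
 * VMM can go on to hand to the guest.
 */
struct iommu_hw_invalidate_queue_alloc {
	__u32 size;		/* sizeof(this struct), for extensibility */
	__u32 flags;		/* must be 0 */
	__u32 ioasid;		/* IOASID whose invalidations are accelerated */
	__u32 out_queue_id;	/* returned hardware queue identifier */
};

#define IOMMU_HW_INVALIDATE_QUEUE_ALLOC \
	_IOWR('i', 0x80, struct iommu_hw_invalidate_queue_alloc)

The point being that the VMID plumbing stays entirely inside the
kernel: userspace only asks for acceleration on an IOASID, and the
driver picks the queue and the VMID binding underneath.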

This also seems like a further example of why /dev/iommu is looking
like a good idea, as this RFC ends up very complicated in order to do
something fairly simple.

Where are things with the /dev/iommu work these days?

Thanks,
Jason