Message-ID: <Zk6zWLcPCc+nWICX@nvidia.com>
Date: Wed, 22 May 2024 20:09:12 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>, "Tian, Kevin" <kevin.tian@...el.com>
CC: "will@...nel.org" <will@...nel.org>, "robin.murphy@....com"
<robin.murphy@....com>, "suravee.suthikulpanit@....com"
<suravee.suthikulpanit@....com>, "joro@...tes.org" <joro@...tes.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-tegra@...r.kernel.org"
<linux-tegra@...r.kernel.org>, "Liu, Yi L" <yi.l.liu@...el.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>, "vasant.hegde@....com"
<vasant.hegde@....com>, "jon.grimm@....com" <jon.grimm@....com>,
"santosh.shukla@....com" <santosh.shukla@....com>, "Dhaval.Giani@....com"
<Dhaval.Giani@....com>, "shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>
Subject: Re: [PATCH RFCv1 00/14] Add Tegra241 (Grace) CMDQV Support (part 2/2)
On Wed, May 22, 2024 at 11:43:51PM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@...dia.com>
> > Sent: Thursday, May 23, 2024 7:29 AM
> > On Wed, May 22, 2024 at 12:47:19PM -0700, Nicolin Chen wrote:
> > > Yea, SMMU also has an Event Queue and a PRI queue. Though I
> > > haven't had time to sit down and look at Baolu's work closely,
> > > the uAPI seems to be a unified one for all IOMMUs. I have no
> > > intention of arguing against that design, yet maybe there could
> > > be an alternative in somewhat HW-specific language, as we do for
> > > invalidation? Or is it not worth it?
> >
> > I was thinking it's not worth it. I expect the gain here is to do as
> > AMD has done and have the HW DMA the queues directly to guest memory.
> >
> > IMHO the primary issue with the queues is DoS, as having any queue
> > shared across VMs is dangerous in that way. Allowing each VIOMMU to
> > have its own private queue and its own flow control helps with that.
> >
>
> and also a shorter delivery path with less data copying?
Should I interpret that as a yes for fault reporting via VQUEUE?

Only AMD HW can DMA the events directly to the guest queue
memory. All the others need a backward translation of (at least)
a physical dev ID to a virtual dev ID. That is now doable in the
kernel with the ongoing vdev_id design, by the way. So the kernel
could then write the guest memory directly to report events?
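To sketch what I mean (all the my_* names below are made up for
illustration; this is not the actual vdev_id API), the per-VIOMMU
reporting path could roughly be:

	/*
	 * Minimal sketch only -- assumes the vdev_id lookup is kept
	 * in a per-VIOMMU xarray and the guest event queue is already
	 * mapped into the kernel.
	 */
	#include <linux/errno.h>
	#include <linux/string.h>
	#include <linux/types.h>
	#include <linux/xarray.h>

	struct my_event {
		u32 dev_id;		/* dev ID as the guest knows it */
		u32 flags;
		u64 payload;
	};

	struct my_viommu {
		struct xarray vdev_ids;	/* phys dev ID -> virt dev ID */
		void *guest_queue;	/* kernel mapping of guest queue */
		u32 q_tail;
		u32 q_mask;		/* queue entries - 1, power of 2 */
	};

	static int my_viommu_report_event(struct my_viommu *viommu,
					  u32 phys_dev_id,
					  struct my_event *evt)
	{
		void *entry;

		/* Backward-translate phys dev ID to the guest's view */
		entry = xa_load(&viommu->vdev_ids, phys_dev_id);
		if (!entry)
			return -ENOENT;
		evt->dev_id = (u32)xa_to_value(entry);

		/* Write the fixed-up event straight into guest memory */
		memcpy(viommu->guest_queue +
		       (size_t)viommu->q_tail * sizeof(*evt),
		       evt, sizeof(*evt));
		viommu->q_tail = (viommu->q_tail + 1) & viommu->q_mask;
		return 0;
	}

i.e. a simple lookup fixes up the dev ID in the event record, and
then the kernel copies it into the mapped guest queue and advances
the tail, with no trip through a shared uAPI queue.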
Thanks
Nicolin