Message-ID: <20240523124806.GK20229@nvidia.com>
Date: Thu, 23 May 2024 09:48:06 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: "Tian, Kevin" <kevin.tian@...el.com>,
	"will@...nel.org" <will@...nel.org>,
	"robin.murphy@....com" <robin.murphy@....com>,
	"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
	"joro@...tes.org" <joro@...tes.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
	"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>,
	"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
	"Liu, Yi L" <yi.l.liu@...el.com>,
	"eric.auger@...hat.com" <eric.auger@...hat.com>,
	"vasant.hegde@....com" <vasant.hegde@....com>,
	"jon.grimm@....com" <jon.grimm@....com>,
	"santosh.shukla@....com" <santosh.shukla@....com>,
	"Dhaval.Giani@....com" <Dhaval.Giani@....com>,
	"shameerali.kolothum.thodi@...wei.com" <shameerali.kolothum.thodi@...wei.com>
Subject: Re: [PATCH RFCv1 00/14] Add Tegra241 (Grace) CMDQV Support (part 2/2)

On Wed, May 22, 2024 at 08:09:12PM -0700, Nicolin Chen wrote:
> On Wed, May 22, 2024 at 11:43:51PM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@...dia.com>
> > > Sent: Thursday, May 23, 2024 7:29 AM
> > > On Wed, May 22, 2024 at 12:47:19PM -0700, Nicolin Chen wrote:
> > > > Yea, SMMU also has an Event Queue and a PRI queue. Though I haven't
> > > > had time to sit down and look at Baolu's work closely, the uAPI
> > > > seems to be a unified one for all IOMMUs. I have no intention of
> > > > opposing that design, but maybe there could be an alternative in a
> > > > somewhat HW-specific language, as we do for invalidation? Or is it
> > > > not worth it?
> > >
> > > I was thinking it is not worth it; I expect the gain here is to do as
> > > AMD has done and make the HW DMA the queues directly to guest memory.
> > >
> > > IMHO the primary issue with the queues is DoS, as having any queue
> > > shared across VMs is dangerous in that way. Allowing each VIOMMU to
> > > have its own private queue and its own flow control helps with that.
> > >
> > 
> > and also a shorter delivery path with less data copying?
> 
> Should I interpret that as a yes for fault reporting via VQUEUE?
> 
> Only AMD's HW can DMA the events to the guest queue memory. The
> others all need a backward translation of (at least) a physical
> dev ID to a virtual dev ID. This is now doable in the kernel with
> the ongoing vdev_id design, by the way. So could the kernel then
> write the guest memory directly to report events?

I don't think we should get into the kernel doing direct access at
this point; let's focus on basic functionality before we get to
micro-optimizations like that.

So long as the API could support doing something like that, it could
be done later, after benchmarking etc.

Jason
