Message-ID: <20240923183447.GH9417@nvidia.com>
Date: Mon, 23 Sep 2024 15:34:47 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: "Tian, Kevin" <kevin.tian@...el.com>
Cc: Nicolin Chen <nicolinc@...dia.com>, "will@...nel.org" <will@...nel.org>,
"joro@...tes.org" <joro@...tes.org>,
"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
"robin.murphy@....com" <robin.murphy@....com>,
"dwmw2@...radead.org" <dwmw2@...radead.org>,
"baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>,
"shuah@...nel.org" <shuah@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"mdf@...nel.org" <mdf@...nel.org>,
"mshavit@...gle.com" <mshavit@...gle.com>,
"shameerali.kolothum.thodi@...wei.com" <shameerali.kolothum.thodi@...wei.com>,
"smostafa@...gle.com" <smostafa@...gle.com>,
"Liu, Yi L" <yi.l.liu@...el.com>
Subject: Re: [PATCH v2 17/19] iommu/arm-smmu-v3: Add
arm_smmu_viommu_cache_invalidate
On Wed, Sep 18, 2024 at 08:10:52AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@...dia.com>
> > Sent: Saturday, September 14, 2024 10:51 PM
> >
> > On Fri, Sep 13, 2024 at 02:33:59AM +0000, Tian, Kevin wrote:
> > > > From: Jason Gunthorpe <jgg@...dia.com>
> > > > Sent: Thursday, September 12, 2024 7:08 AM
> > > >
> > > > On Wed, Sep 11, 2024 at 08:13:01AM +0000, Tian, Kevin wrote:
> > > >
> > > > > Probably there is a good reason, e.g. for simplification or better
> > > > > alignment with the hw accel stuff. But it's not explained clearly so far.
> > > >
> > > > Probably the most concrete thing is if you have a direct assignment
> > > > invalidation queue (ie DMA'd directly by HW) then it only applies to a
> > > > single pIOMMU and invalidation commands placed there are unavoidably
> > > > limited in scope.
> > > >
> > > > This creates a representation problem: if we have a vIOMMU that spans
> > > > many pIOMMUs but invalidations cover only some subset, how do we
> > > > model that? Just saying the vIOMMU is linked to the pIOMMU solves this
> > > > nicely.
> > > >
> > >
> > > yes that is a good reason.
> > >
> > > btw do we expect the VMM to try-and-fail when deciding whether a
> > > new vIOMMU object is required when creating a new vdev?
> >
> > I think there was some suggestion that getinfo could return this, but
> > I also think qemu needs to have a command line that matches the physical
> > topology, so maybe it needs some sysfs?
> >
>
> My impression was that Qemu is moving away from directly accessing
> sysfs (e.g. as the reason behind allowing Libvirt to pass in an opened
> cdev fd to Qemu). So probably getinfo makes more sense...
Yes, but I think libvirt needs this information before it invokes
qemu.

The physical and virtual iommus need to sort of match; something
should figure this out automatically, I would guess.
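As a rough illustration of the "figure it out automatically" step, here is a
hedged sketch (none of these names are real iommufd or qemu APIs; the
`phys_iommu_of` lookup stands in for whatever a getinfo-style query would
report) of how a VMM could group devices by physical IOMMU and create one
vIOMMU object per pIOMMU, so each invalidation queue's scope maps to exactly
one pIOMMU:

```python
# Hypothetical sketch only: group devices by the physical IOMMU behind
# them, mirroring the "vIOMMU linked to one pIOMMU" model discussed
# above. None of these names are real iommufd APIs.

def build_viommus(devices, phys_iommu_of):
    """devices: iterable of device ids.
    phys_iommu_of: callable mapping a device id to its pIOMMU id,
    e.g. what a getinfo-style query might report for each device.
    Returns a mapping of pIOMMU id -> devices behind it; the VMM
    would instantiate one vIOMMU object per key."""
    viommus = {}
    for dev in devices:
        viommus.setdefault(phys_iommu_of(dev), []).append(dev)
    return viommus

# Example: four devices spread across two physical IOMMUs.
topology = {"dev0": "piommu0", "dev1": "piommu0",
            "dev2": "piommu1", "dev3": "piommu1"}
grouped = build_viommus(topology, topology.__getitem__)
# An invalidation placed on the vIOMMU backed by "piommu0" only ever
# needs to reach dev0/dev1, so its scope is naturally limited to that
# single pIOMMU's queue.
```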
Jason