Message-ID: <ZlY/kxg0VJf7olXr@Asurada-Nvidia>
Date: Tue, 28 May 2024 13:33:23 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: "Tian, Kevin" <kevin.tian@...el.com>
CC: Jason Gunthorpe <jgg@...dia.com>, "will@...nel.org" <will@...nel.org>,
"robin.murphy@....com" <robin.murphy@....com>,
"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
"joro@...tes.org" <joro@...tes.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>, "linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-tegra@...r.kernel.org"
<linux-tegra@...r.kernel.org>, "Liu, Yi L" <yi.l.liu@...el.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>, "vasant.hegde@....com"
<vasant.hegde@....com>, "jon.grimm@....com" <jon.grimm@....com>,
"santosh.shukla@....com" <santosh.shukla@....com>, "Dhaval.Giani@....com"
<Dhaval.Giani@....com>, "shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>
Subject: Re: [PATCH RFCv1 08/14] iommufd: Add IOMMU_VIOMMU_SET_DEV_ID ioctl
On Tue, May 28, 2024 at 01:22:46PM -0700, Nicolin Chen wrote:
> On Mon, May 27, 2024 at 01:08:43AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@...dia.com>
> > > Sent: Friday, May 24, 2024 9:19 PM
> > >
> > > On Fri, May 24, 2024 at 07:13:23AM +0000, Tian, Kevin wrote:
> > > > I'm curious to learn the real reason for that design. Is it because you
> > > > want to do certain load-balancing between viommus, or due to other
> > > > reasons in the kernel smmuv3 driver which e.g. cannot support a
> > > > viommu spanning multiple pSMMUs?
> > >
> > > Yeah, there is no concept of support for an SMMUv3 instance where its
> > > command queues can only work on a subset of devices.
> > >
> > > My expectation was that VIOMMU would be 1:1 with physical IOMMU
> > > instances; I think AMD needs this too??
> > >
> >
> > Yes, this part is clear now with regard to VCMDQ.
> >
> > But Nicolin said:
> >
> > "
> > One step back: even without the VCMDQ feature, a multi-pSMMU setup
> > will have multiple viommus (with our latest design) added to the
> > viommu list of a single vSMMU. Yet, the vSMMU in this case always
> > traps regular SMMU CMDQ commands, so it can do viommu selection
> > or even broadcast (if it has to).
> > "
> >
> > I don't think there is an arch limitation mandating that?
>
> What I mean is for a regular vSMMU. Without VCMDQ, a regular vSMMU
> on a multi-pSMMU setup will look like this (e.g. three devices behind
> different SMMUs):
>          |<------ VMM ------->|<------ kernel ------>|
>           |-- viommu0 --|-- pSMMU0 --|
>    vSMMU--|-- viommu1 --|-- pSMMU1 --|--s2_hwpt
>           |-- viommu2 --|-- pSMMU2 --|
>
> And each device would attach to:
>          |<---- guest ---->|<--- VMM --->|<- kernel ->|
>           |-- dev0 --|-- viommu0 --|-- pSMMU0 --|
>    vSMMU--|-- dev1 --|-- viommu1 --|-- pSMMU1 --|
>           |-- dev2 --|-- viommu2 --|-- pSMMU2 --|
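To make the selection-vs-broadcast point in the quoted diagrams concrete,
here is a minimal user-space sketch (not QEMU or kernel code; the names
vsmmu, viommu_handle and vsmmu_dispatch are made up for illustration) of
how a VMM-side vSMMU CMDQ trap could forward a device-scoped command to
the one viommu backing that device's pSMMU, or broadcast it to every
viommu when the command carries no device scope:

/*
 * Illustrative only: routes a trapped vSMMU command either to the single
 * viommu owning the target device or to all viommus (broadcast).
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_VIOMMUS 3

struct viommu_handle {
	int id;		/* hypothetical viommu object ID */
	int psmmu;	/* physical SMMU instance it maps to */
};

struct vsmmu {
	struct viommu_handle viommus[MAX_VIOMMUS];
	int nr_viommus;
};

/* Hypothetical stand-in for issuing one command to one viommu. */
static void viommu_issue(struct viommu_handle *v, uint64_t cmd)
{
	printf("viommu%d (pSMMU%d): cmd 0x%llx\n", v->id, v->psmmu,
	       (unsigned long long)cmd);
}

/*
 * dev_viommu_id >= 0: a device-scoped command (e.g. an ATC invalidation
 * by StreamID) goes only to the viommu that owns the device.
 * dev_viommu_id < 0: an ASID/VA-scoped invalidation has no device scope,
 * so the VMM must broadcast it to every viommu.
 */
static void vsmmu_dispatch(struct vsmmu *s, uint64_t cmd, int dev_viommu_id)
{
	for (int i = 0; i < s->nr_viommus; i++) {
		struct viommu_handle *v = &s->viommus[i];

		if (dev_viommu_id < 0 || v->id == dev_viommu_id)
			viommu_issue(v, cmd);
	}
}

int main(void)
{
	struct vsmmu s = {
		.viommus = { { 0, 0 }, { 1, 1 }, { 2, 2 } },
		.nr_viommus = 3,
	};

	vsmmu_dispatch(&s, 0x11, 1);	/* dev1 behind viommu1/pSMMU1 only */
	vsmmu_dispatch(&s, 0x22, -1);	/* no device scope: broadcast */
	return 0;
}

The broadcast path is what the quoted text calls "(if it has to)": for
commands the VMM cannot pin to one pSMMU, forwarding to every viommu is
the safe fallback.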
I accidentally sent a duplicate reply. Please ignore this one
and check the other one. Thanks!