Message-ID: <ZBr3/1NzY5VvJrJQ@nvidia.com>
Date: Wed, 22 Mar 2023 09:43:43 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: "Tian, Kevin" <kevin.tian@...el.com>,
Robin Murphy <robin.murphy@....com>,
"will@...nel.org" <will@...nel.org>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>,
"joro@...tes.org" <joro@...tes.org>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1 14/14] iommu/arm-smmu-v3: Add
arm_smmu_cache_invalidate_user
On Tue, Mar 21, 2023 at 11:42:25PM -0700, Nicolin Chen wrote:
> On Tue, Mar 21, 2023 at 08:48:31AM -0300, Jason Gunthorpe wrote:
> > On Tue, Mar 21, 2023 at 08:34:00AM +0000, Tian, Kevin wrote:
> >
> > > > > Rephrasing that to put into a design: the IOCTL would pass a
> > > > > user pointer to the queue, the size of the queue, then a head
> > > > > pointer and a tail pointer? Then the kernel reads out all the
> > > > > commands between the head and the tail and handles all those
> > > > > invalidation commands only?
> > > >
> > > > Yes, that is one possible design
> > >
> > > If we cannot have the short path in the kernel then I'm not sure of
> > > the value of using the native format and queue in the uAPI. Batching
> > > can be enabled over any format.
> >
> > SMMUv3 will have a hardware short path where the HW itself runs the
> > VM's command queue and does this logic.
> >
> > So I like the symmetry of the SW path being close to that.
>
> A tricky thing here that I just realized:
>
> With VCMDQ, the guest will have two CMDQs. One is the vSMMU's
> CMDQ handling all non-TLBI commands like CMD_CFGI_STE via the
> invalidation IOCTL, and the other is the hardware-accelerated
> VCMDQ handling all TLBI commands directly in HW. In this setup,
> we will need a VCMDQ kernel driver to dispatch commands into
> the two different queues.
You mean a VM kernel driver? Yes, that was always the point, the VM
would use the extra CMDQs only for invalidation.

The main CMDQ would work as today through a trap.
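
Roughly something like the sketch below on the guest side, just to
illustrate - smmu->vcmdq and the opcode range test are invented here,
this is not the real VCMDQ driver:

/* Guest-side dispatch: TLBI goes to the HW-accelerated VCMDQ, everything
 * else (CMD_CFGI_STE, ...) goes to the normal CMDQ, which the vSMMU traps.
 */
static struct arm_smmu_cmdq *
vsmmu_select_cmdq(struct arm_smmu_device *smmu, u64 *cmd)
{
	u8 op = FIELD_GET(CMDQ_0_OP, cmd[0]);

	if (op >= CMDQ_OP_TLBI_NH_ASID && op <= CMDQ_OP_TLBI_NSNH_ALL)
		return smmu->vcmdq;	/* invented field for the extra queue */

	return &smmu->cmdq;
}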
> Yet, it feels a bit different with this SW path exposing the
> entire SMMU CMDQ, since now theoretically non-TLBI and TLBI
> commands can be interlaced in one batch, so the hypervisor
> should go through the queue first to handle and delete all
> non-TLBI commands, and then forward the CMDQ to the host to
> run the remaining TLBI commands, if there are any.
Yes, there are a few different ways to handle this and still preserve
batching. It is part of the reason it would be hard to make the kernel
natively parse the command queue.
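
For example the VMM could do the split itself before calling the
invalidation ioctl - a rough userspace sketch, where emulate_cmd() is a
made-up VMM helper and the 0x10..0x30 TLBI opcode window is my reading
of the spec, so double check it:

#include <stdint.h>
#include <string.h>

/* hypothetical VMM helper that emulates one command (CFGI_STE etc) */
void emulate_cmd(const uint64_t *cmd);

/* the opcode sits in bits [7:0] of the first command dword */
static inline uint8_t cmd_opcode(const uint64_t *cmd)
{
	return cmd[0] & 0xff;
}

/*
 * Walk a batch of 16-byte guest commands: TLBI commands are collected so
 * they can be forwarded to the kernel in one invalidation call, everything
 * else is emulated in the VMM.
 */
static void split_guest_batch(const uint64_t *cmds, unsigned int n,
			      uint64_t *tlbi, unsigned int *n_tlbi)
{
	unsigned int i;

	*n_tlbi = 0;
	for (i = 0; i < n; i++) {
		const uint64_t *cmd = &cmds[i * 2];
		uint8_t op = cmd_opcode(cmd);

		if (op >= 0x10 && op <= 0x30)
			memcpy(&tlbi[(*n_tlbi)++ * 2], cmd, 16);
		else
			emulate_cmd(cmd);
	}
}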
On the other hand, we could add some more native kernel support for a
SW emulated vCMDQ and that might be interesting for performance.
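
As a strawman for what that uAPI could look like - to be clear, nothing
like this exists in iommufd today and the struct and field names are
all invented:

struct iommu_hwpt_arm_smmuv3_cmdq_inv {
	__aligned_u64 q_uptr;	/* user pointer to the guest CMDQ contents */
	__u32 log2size;		/* queue size as log2(number of entries) */
	__u32 head;		/* first entry for the kernel to consume */
	__u32 tail;		/* first entry the kernel must not touch */
	__u32 __reserved;
};

The kernel would walk [head, tail), reject anything that is not a TLBI
command, fix up the VMID/ASID and push the rest to the real CMDQ as one
batch.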
One of the biggest reasons to use nesting is to get to vSVA, and
invalidation performance is very important in a vSVA environment. We
should not ignore this in the design.
> > If the VMID is tied to the entire iommufd_ctx then it can flow
> > independently.
>
> One more thing about the VMID unification is that the SMMU might
> have a limitation on the VMID range:
> smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
> ...
> vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
>
> So, we'd likely need a CAP for that, to apply that limitation
> to the iommufd_ctx too?
I'd imagine the driver would have to allocate its internal data
against the iommufd_ctx.

I'm not sure how best to organize that if it is the way to go.
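
Maybe something like the sketch below, purely to illustrate the idea -
struct arm_smmu_ictx and the ictx_vmids xarray are invented, only
arm_smmu_bitmap_alloc/free and vmid_bits are existing driver pieces:

struct arm_smmu_ictx {
	struct iommufd_ctx *ictx;
	refcount_t users;
	u16 vmid;
};

/* One VMID per iommufd_ctx, shared by every S2 domain in that ctx */
static int arm_smmu_get_ictx_vmid(struct arm_smmu_device *smmu,
				  struct iommufd_ctx *ictx)
{
	struct arm_smmu_ictx *e;
	int vmid;

	e = xa_load(&smmu->ictx_vmids, (unsigned long)ictx);
	if (e) {
		refcount_inc(&e->users);
		return e->vmid;
	}

	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
	if (vmid < 0)
		return vmid;

	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (!e) {
		arm_smmu_bitmap_free(smmu->vmid_map, vmid);
		return -ENOMEM;
	}
	e->ictx = ictx;
	e->vmid = vmid;
	refcount_set(&e->users, 1);
	if (xa_err(xa_store(&smmu->ictx_vmids, (unsigned long)ictx, e,
			    GFP_KERNEL))) {
		kfree(e);
		arm_smmu_bitmap_free(smmu->vmid_map, vmid);
		return -ENOMEM;
	}
	return vmid;
}

Every S2 iommu_domain created through that ictx would then share the
one VMID.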
Do we have a use case for more than one S2 iommu_domain on ARM?
Jason