Message-ID: <Zr6iPtiZ/afonJ5k@Asurada-Nvidia>
Date: Thu, 15 Aug 2024 17:50:06 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: <kevin.tian@...el.com>, <will@...nel.org>, <joro@...tes.org>,
<suravee.suthikulpanit@....com>, <robin.murphy@....com>,
<dwmw2@...radead.org>, <baolu.lu@...ux.intel.com>, <shuah@...nel.org>,
<linux-kernel@...r.kernel.org>, <iommu@...ts.linux.dev>,
<linux-arm-kernel@...ts.infradead.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 15/16] iommu/arm-smmu-v3: Add viommu cache
invalidation support
On Thu, Aug 15, 2024 at 08:36:35PM -0300, Jason Gunthorpe wrote:
> On Wed, Aug 07, 2024 at 01:10:56PM -0700, Nicolin Chen wrote:
> > +static int arm_smmu_convert_viommu_vdev_id(struct iommufd_viommu *viommu,
> > + u32 vdev_id, u32 *sid)
> > +{
> > + struct arm_smmu_master *master;
> > + struct device *dev;
> > +
> > + dev = iommufd_viommu_find_device(viommu, vdev_id);
> > + if (!dev)
> > + return -EIO;
> > + master = dev_iommu_priv_get(dev);
> > +
> > + if (sid)
> > + *sid = master->streams[0].id;
>
> See this is the thing that needs to be locked right
>
> xa_lock()
> dev = xa_load()
> master = dev_iommu_priv_get(dev);
> *sid = master->streams[0].id;
> xa_unlock()
>
> Then no risk of dev going away under us.
Yea, I figured that out.
Though only the driver knows whether it will eventually access
the vdev_id list, I'd like to keep things in the way of having a
core-managed VIOMMU object (IOMMU_VIOMMU_TYPE_DEFAULT), so the
viommu invalidation handler can take a lock at its top level to
protect any potential access to the vdev_id list.
> > @@ -3249,6 +3266,19 @@ arm_smmu_convert_user_cmd(struct arm_smmu_domain *s2_parent,
> > cmd->cmd[0] &= ~CMDQ_TLBI_0_VMID;
> > cmd->cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, vmid);
> > break;
> > + case CMDQ_OP_ATC_INV:
> > + case CMDQ_OP_CFGI_CD:
> > + case CMDQ_OP_CFGI_CD_ALL:
>
> Oh, I didn't catch on that CD was needing this too.. :\
Well, the viommu cache covers a very wide range :)
> That makes the other op much more useless than I expected. I really
> wanted to break these two series apart.
HWPT invalidate and VIOMMU invalidate are somewhat duplicated, in
both concept and implementation, for SMMUv3. It's not a problem to
have both, but practically I can't think of a reason why a VMM
wouldn't simply stick to the wider VIOMMU invalidate uAPI alone..
> Maybe we need to drop the hwpt invalidation from the other series and
Yea, the hwpt invalidate is just one patch in your series, so it's
easy to move if we want to.
> aim to merge this all together through the iommufd tree.
I have been hoping for that, as you can see those driver patches
are included here :)
And there will be another two series that I'd like to go through
the IOMMUFD tree as well:
VIOMMU part-1 (ALLOC/SET_VDEV_ID/INVALIDATE) + smmu user cache invalidate
VIOMMU part-2 (VIRQ) + smmu virtual IRQ handling
VIOMMU part-3 (VQUEUE) + CMDQV user-space support
Thanks
Nicolin