Message-ID: <BN9PR11MB5276E8767AB63378C81130528CD62@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Wed, 26 Jun 2024 06:53:04 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>, Joerg Roedel <joro@...tes.org>,
	Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
	Jason Gunthorpe <jgg@...pe.ca>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] iommu/vt-d: Refactor PCI PRI enabling/disabling callbacks
> From: Lu Baolu <baolu.lu@...ux.intel.com>
> Sent: Thursday, June 6, 2024 11:40 AM
>
> +/*
> + * Invalidate the caches for a present-to-present change in a context
> + * table entry according to the Spec 6.5.3.3 (Guidance to Software for
> + * Invalidations).
> + *
> + * Since context entry is not encoded by domain-id when operating in
> + * scalable-mode (refer Section 6.2.1), this performs coarser
> + * invalidation than the domain-selective granularity requested.
> + */
> +static void invalidate_present_context_change(struct device_domain_info *info)
> +{
> +	struct intel_iommu *iommu = info->iommu;
> +
> +	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
> +	if (sm_supported(iommu))
> +		qi_flush_pasid_cache(iommu, 0, QI_PC_GLOBAL, 0);
> +	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
> +	__iommu_flush_dev_iotlb(info, 0, MAX_AGAW_PFN_WIDTH);
> +}
> +
This invalidates the entire context cache/IOTLB for all devices behind
this iommu just because of a PRI enable/disable operation on a single
device.

No, that's way too much. If there is a burden to identify all the active
DIDs used by this device, then pay it and penalize only that device.

Btw, conceptually PRI will not be enabled/disabled while any PASIDs of
this device are actively attached. So at this point only the RID should
have an attached domain; we just need to find that domain's DID and use
it to invalidate the related caches. Something along the lines of the
sketch below is what I have in mind.
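
(Rough sketch only, untested. It assumes info->domain points to the
RID's attached domain at this point and reuses the existing helpers --
domain_id_iommu(), qi_flush_pasid_cache(), __iommu_flush_dev_iotlb() --
so take the exact calls as illustrative.)

static void invalidate_present_context_change(struct device_domain_info *info)
{
	struct intel_iommu *iommu = info->iommu;
	/* Only the RID has an attached domain here, no PASIDs attached. */
	struct dmar_domain *domain = info->domain;
	u16 did = domain_id_iommu(domain, iommu);
	u16 sid = PCI_DEVID(info->bus, info->devfn);

	/* Device-selective context-cache invalidation for this RID only */
	iommu->flush.flush_context(iommu, did, sid, DMA_CCMD_MASK_NOBIT,
				   DMA_CCMD_DEVICE_INVL);
	/* DID-based PASID-cache invalidation instead of a global one */
	if (sm_supported(iommu))
		qi_flush_pasid_cache(iommu, did, QI_PC_ALL_PASIDS, 0);
	/* Domain-selective IOTLB flush, then the device TLB as before */
	iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
	__iommu_flush_dev_iotlb(info, 0, MAX_AGAW_PFN_WIDTH);
}

That keeps the context-cache invalidation device-selective and the
PASID-cache/IOTLB invalidations DID-based, instead of penalizing every
device behind this iommu.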