Message-ID: <BN9PR11MB5276F7B7E0F6091334A7A3128C43A@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Thu, 3 Jul 2025 07:16:46 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>, Joerg Roedel <joro@...tes.org>,
	"Will Deacon" <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
	"Ioanna Alifieraki" <ioanna-maria.alifieraki@...onical.com>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: RE: [PATCH 1/1] iommu/vt-d: Optimize iotlb_sync_map for
non-caching/non-RWBF modes
> From: Lu Baolu <baolu.lu@...ux.intel.com>
> Sent: Thursday, July 3, 2025 11:16 AM
>
> The iotlb_sync_map callback in the iommu ops allows drivers to perform
> the necessary cache flushes when new mappings are established. For the
> Intel iommu driver, this callback serves two purposes:
>
> - To flush caches when a second-stage page table is attached to a device
> whose iommu is operating in caching mode (CAP_REG.CM==1).
> - To explicitly flush internal write buffers to ensure updates to memory-
> resident remapping structures are visible to hardware (CAP_REG.RWBF==1).
>
> However, in scenarios where neither caching mode nor the RWBF flag is
> active, the cache_tag_flush_range_np() helper, which is called in the
> iotlb_sync_map path, effectively becomes a no-op.
>
> Despite being a no-op, cache_tag_flush_range_np() involves iterating
> through all cache tags of the iommus attached to the domain, protected
> by a spinlock. This unnecessary execution path introduces overhead,
> leading to a measurable I/O performance regression. On systems with
> NVMe drives under the same bridge, performance was observed to drop
> from ~6150 MiB/s to ~4985 MiB/s.
So for the same-bridge case, the two NVMe disks are likely in the same
iommu group, sharing a domain; the contention is then on the spinlock
between two parallel map threads, one per disk. When the disks sit
behind different bridges, they are attached to different domains, hence
no contention.

Is that a correct description of the difference between the same- vs.
different-bridge cases?
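
(To illustrate, the shape of that path as I read
drivers/iommu/intel/cache.c — a simplified sketch, not the exact driver
code:)

/*
 * Simplified sketch of cache_tag_flush_range_np(): even when neither
 * CAP_REG.CM nor CAP_REG.RWBF is set and the loop body does nothing,
 * every map request still takes domain->cache_lock and walks the
 * cache tags, serializing parallel maps on a shared domain.
 */
void cache_tag_flush_range_np(struct dmar_domain *domain,
			      unsigned long start, unsigned long end)
{
	struct cache_tag *tag;
	unsigned long flags;

	spin_lock_irqsave(&domain->cache_lock, flags);
	list_for_each_entry(tag, &domain->cache_tags, node) {
		/* effectively a no-op without CM or RWBF */
	}
	spin_unlock_irqrestore(&domain->cache_lock, flags);
}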
> @@ -1833,6 +1845,8 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
>  	if (ret)
>  		goto out_block_translation;
>
> +	domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain, iommu);
> +
>  	return 0;
>
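(For reference, I'd expect the new helper to reduce to the two
capability checks from the commit message; a sketch of my reading, not
copied from the patch:)

/*
 * Sketch (my reading of the intended logic, not the patch itself):
 * sync_map is only needed for caching mode on a second-stage table,
 * or when explicit write-buffer flushing is required.
 */
static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
				       struct intel_iommu *iommu)
{
	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
		return true;

	if (cap_rwbf(iommu->cap))
		return true;

	return false;
}
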
Also need to update the flag upon detach; otherwise the domain keeps
paying the sync cost after the last CM/RWBF iommu goes away. Perhaps
something along the lines of the sketch below:
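
/*
 * Untested sketch (helper name is mine, not from the patch): on
 * detach, recompute the flag from the iommus still attached to the
 * domain so it can drop back to false. Locking elided; the caller
 * would need to serialize against attach.
 */
static void domain_update_iotlb_sync_map(struct dmar_domain *domain)
{
	struct iommu_domain_info *info;
	unsigned long i;
	bool need = false;

	xa_for_each(&domain->iommu_array, i, info)
		need |= domain_need_iotlb_sync_map(domain, info->iommu);

	domain->iotlb_sync_map = need;
}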
With that fixed:
Reviewed-by: Kevin Tian <kevin.tian@...el.com>