Message-ID: <20260205140059.11857-2-magnus@dolphinics.com>
Date: Thu, 5 Feb 2026 15:01:00 +0100
From: Magnus Kalland <magnus@...phinics.com>
To: joro@...tes.org
Cc: iommu@...ts.linux.dev,
jonas@...phinics.com,
larsk@...phinics.com,
linux-kernel@...r.kernel.org,
magnus@...phinics.com,
suravee.suthikulpanit@....com,
torel@...ula.no,
vasant.hegde@....com
Subject: [PATCH v2] iommu/amd: Invalidate IRT cache for DMA aliases

DMA aliasing causes interrupt remapping table entries (IRTEs) to be shared
between multiple device IDs. See commit 3c124435e8dd
("iommu/amd: Support multiple PCI DMA aliases in IRQ Remapping") for
background. However, the AMD IOMMU driver currently invalidates IRTE cache
entries only for the device ID an IRTE update was issued for, not for each
of its DMA aliases.

This leaves stale IRTE cache entries when an IRTE is cached under one DMA
alias but later updated and invalidated through a different alias. In that
case the device ID under which the entry was originally cached is never
invalidated, because the invalidation is issued only for the alias used to
program the update.

This incoherency has been observed when IRTEs are cached under one
Non-Transparent Bridge (NTB) DMA alias and later updated via another.

Fix this by invalidating the interrupt remapping table cache for all DMA
aliases when updating an IRTE.
Link: https://lore.kernel.org/linux-iommu/fwtqfdk3m7qrazj4bfutl4grac46agtxztc3p2lqnejt2wyexu@lztyomxrm3pk/
Signed-off-by: Magnus Kalland <magnus@...phinics.com>
---
v2:
- Move the lock acquire before branching
- Call iommu_flush_dev_irt() when pdev is null
- Handle pdev refcount in correct branch
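
For reference, a minimal sketch (illustrative only, not part of the patch)
of the per-alias fan-out used in the hunk below. The callback signature is
the one pci_for_each_dma_alias() expects; the helper name flush_one_alias
is hypothetical:

static int flush_one_alias(struct pci_dev *unused, u16 alias, void *data)
{
        struct amd_iommu *iommu = data;
        struct iommu_cmd cmd;

        /* Queue INVALIDATE_INTERRUPT_TABLE for this alias's device ID */
        build_inv_irt(&cmd, alias);
        return __iommu_queue_command_sync(iommu, &cmd, true);
}

        /*
         * Caller side, with pdev looked up via pci_get_domain_bus_and_slot():
         * every DMA alias of the device, including its own requester ID,
         * gets its IRT cache entry invalidated.
         */
        ret = pci_for_each_dma_alias(pdev, flush_one_alias, iommu);
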
drivers/iommu/amd/iommu.c | 32 +++++++++++++++++++++++++++-----
1 file changed, 27 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 2e1865daa1ce..b5256b28b0c8 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -3077,25 +3077,47 @@ const struct iommu_ops amd_iommu_ops = {
static struct irq_chip amd_ir_chip;
static DEFINE_SPINLOCK(iommu_table_lock);
+static int iommu_flush_dev_irt(struct pci_dev *unused, u16 devid, void *data)
+{
+ int ret;
+ struct iommu_cmd cmd;
+ struct amd_iommu *iommu = data;
+
+ build_inv_irt(&cmd, devid);
+ ret = __iommu_queue_command_sync(iommu, &cmd, true);
+ return ret;
+}
+
static void iommu_flush_irt_and_complete(struct amd_iommu *iommu, u16 devid)
{
int ret;
u64 data;
+ int domain = iommu->pci_seg->id;
+ unsigned int bus = PCI_BUS_NUM(devid);
+ unsigned int devfn = devid & 0xff;
unsigned long flags;
- struct iommu_cmd cmd, cmd2;
+ struct iommu_cmd cmd;
+ struct pci_dev *pdev = NULL;
if (iommu->irtcachedis_enabled)
return;
- build_inv_irt(&cmd, devid);
data = atomic64_inc_return(&iommu->cmd_sem_val);
- build_completion_wait(&cmd2, iommu, data);
+ build_completion_wait(&cmd, iommu, data);
+ pdev = pci_get_domain_bus_and_slot(domain, bus, devfn);
raw_spin_lock_irqsave(&iommu->lock, flags);
- ret = __iommu_queue_command_sync(iommu, &cmd, true);
+ if (pdev) {
+ ret = pci_for_each_dma_alias(pdev, iommu_flush_dev_irt, iommu);
+ pci_dev_put(pdev);
+ } else {
+ ret = iommu_flush_dev_irt(NULL, devid, iommu);
+ }
+
if (ret)
goto out;
- ret = __iommu_queue_command_sync(iommu, &cmd2, false);
+
+ ret = __iommu_queue_command_sync(iommu, &cmd, false);
if (ret)
goto out;
wait_on_sem(iommu, data);
--
2.43.0