Message-ID:
 <SI2PR01MB439358422CCAABADBEB21D7CDCF0A@SI2PR01MB4393.apcprd01.prod.exchangelabs.com>
Date: Thu, 23 Oct 2025 23:15:43 +0800
From: Wei Wang <wei.w.wang@...mail.com>
To: suravee.suthikulpanit@....com,
	thomas.lendacky@....com,
	jroedel@...e.de
Cc: kevin.tian@...el.com,
	jgg@...dia.com,
	linux-kernel@...r.kernel.org,
	iommu@...ts.linux.dev,
	Wei Wang <wei.w.wang@...mail.com>
Subject: [PATCH v1] iommu/amd: Set C-bit only for RAM-backed PTEs in IOMMU page tables

When SME is enabled, iommu_v1_map_pages() currently sets the C-bit for
all physical addresses. This is correct for RAM, since SME requires the
C-bit to mark memory as encrypted and to ensure proper
encryption/decryption.

However, applying the C-bit to MMIO addresses is incorrect. Devices and
PCIe switches do not currently interpret the C-bit, and setting it in
MMIO mappings can break PCIe peer-to-peer communication. To avoid this,
set the C-bit only when the physical address is backed by RAM, and leave
MMIO mappings unchanged.

Fixes: 2543a786aa25 ("iommu/amd: Allow the AMD IOMMU to work with memory encryption")
Signed-off-by: Wei Wang <wei.w.wang@...mail.com>
---
 drivers/iommu/amd/io_pgtable.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 70c2f5b1631b..6f395940d0a4 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -353,6 +353,9 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	if (!(prot & IOMMU_PROT_MASK))
 		goto out;
 
+	if (sme_me_mask && page_is_ram(PHYS_PFN(paddr)))
+		paddr = __sme_set(paddr);
+
 	while (pgcount > 0) {
 		count = PAGE_SIZE_PTE_COUNT(pgsize);
 		pte   = alloc_pte(pgtable, iova, pgsize, NULL, gfp, &updated);
@@ -368,10 +371,10 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 			updated = true;
 
 		if (count > 1) {
-			__pte = PAGE_SIZE_PTE(__sme_set(paddr), pgsize);
+			__pte = PAGE_SIZE_PTE(paddr, pgsize);
 			__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_PR | IOMMU_PTE_FC;
 		} else
-			__pte = __sme_set(paddr) | IOMMU_PTE_PR | IOMMU_PTE_FC;
+			__pte = paddr | IOMMU_PTE_PR | IOMMU_PTE_FC;
 
 		if (prot & IOMMU_PROT_IR)
 			__pte |= IOMMU_PTE_IR;
-- 
2.43.0

