Message-ID: <20251227175728.4358-5-dmaluka@chromium.org>
Date: Sat, 27 Dec 2025 18:57:27 +0100
From: Dmytro Maluka <dmaluka@...omium.org>
To: David Woodhouse <dwmw2@...radead.org>,
	Lu Baolu <baolu.lu@...ux.intel.com>,
	iommu@...ts.linux.dev
Cc: Joerg Roedel <joro@...tes.org>,
	Will Deacon <will@...nel.org>,
	Robin Murphy <robin.murphy@....com>,
	linux-kernel@...r.kernel.org,
	"Vineeth Pillai (Google)" <vineeth@...byteword.org>,
	Aashish Sharma <aashish@...hishsharma.net>,
	Grzegorz Jaszczyk <jaszczyk@...omium.org>,
	Chuanxiao Dong <chuanxiao.dong@...el.com>,
	Kevin Tian <kevin.tian@...el.com>,
	Dmytro Maluka <dmaluka@...omium.org>
Subject: [PATCH v2 4/5] iommu/vt-d: Use smp_wmb() before setting context/pasid present bit

The previous patch already ensures that the present bit in context and
PASID entries is not set before the other required bits are set or
cleared (assuming that all updates of those entries go through
WRITE_ONCE(), which is sufficient to order them on x86).

However, it does not hurt to also add an explicit smp_wmb() barrier
(which on x86 is merely a compiler barrier) before setting the present
bit, both as an extra safety measure in case a future change updating
other bits in context/PASID entries forgets to use WRITE_ONCE(), and
for documentation purposes.
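
For illustration (not part of the patch itself), below is a standalone
userspace sketch of this "configure first, set the present bit last"
publish pattern. All names in it (fake_entry, fake_entry_publish,
fake_entry_snapshot) are hypothetical, and C11 atomics stand in for the
kernel primitives: the release fence plays the role of smp_wmb(), the
relaxed atomic accesses play the role of WRITE_ONCE()/READ_ONCE(), and
the reader is a generic consumer of the entry rather than the real
IOMMU table walk.

/*
 * Standalone sketch (hypothetical names, not VT-d driver code) of the
 * "set the present bit last" publish pattern described above.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct fake_entry {
	_Atomic uint64_t lo;	/* bit 0 = present, other bits = config */
	_Atomic uint64_t hi;
};

/* Writer side: fill the entry, then set the present bit last. */
static void fake_entry_publish(struct fake_entry *e, uint64_t lo, uint64_t hi)
{
	/* Fill in all configuration bits first, present bit still clear. */
	atomic_store_explicit(&e->hi, hi, memory_order_relaxed);
	atomic_store_explicit(&e->lo, lo & ~1ULL, memory_order_relaxed);

	/* smp_wmb() analogue: order the stores above before the one below. */
	atomic_thread_fence(memory_order_release);

	/* Now set the present bit, making the whole entry visible. */
	atomic_store_explicit(&e->lo, lo | 1ULL, memory_order_relaxed);
}

/* Reader side: only trust the other fields once the present bit is seen. */
static int fake_entry_snapshot(struct fake_entry *e, uint64_t *lo, uint64_t *hi)
{
	uint64_t l = atomic_load_explicit(&e->lo, memory_order_relaxed);

	if (!(l & 1ULL))
		return 0;	/* not present yet */

	/* Pairs with the release fence in fake_entry_publish(). */
	atomic_thread_fence(memory_order_acquire);

	*lo = l;
	*hi = atomic_load_explicit(&e->hi, memory_order_relaxed);
	return 1;
}

int main(void)
{
	struct fake_entry e = { 0, 0 };
	uint64_t lo, hi;

	fake_entry_publish(&e, 0xabcd0001ULL, 0x1234ULL);
	if (fake_entry_snapshot(&e, &lo, &hi))
		printf("entry: lo=%#llx hi=%#llx\n",
		       (unsigned long long)lo, (unsigned long long)hi);
	return 0;
}

On x86 both the release fence here and smp_wmb() in the patch reduce to
a compiler-only barrier, since the architecture already keeps stores
ordered against earlier stores.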

Suggested-by: Lu Baolu <baolu.lu@...ux.intel.com>
Signed-off-by: Dmytro Maluka <dmaluka@...omium.org>
---
 drivers/iommu/intel/iommu.h | 10 ++++++++++
 drivers/iommu/intel/pasid.h |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 5bc69ffc7c8e..75576885314b 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -909,6 +909,16 @@ static inline void entry_set_bits(u64 *ptr, u64 mask, u64 bits)
 
 static inline void context_set_present(struct context_entry *context)
 {
+	/*
+	 * Make sure the present bit is not set before the other bits
+	 * have been updated.
+	 *
+	 * This barrier may be redundant, but only as long as all context
+	 * entry modifications use WRITE_ONCE(), which is enough to ensure
+	 * ordering between them on x86 hardware.
+	 */
+	smp_wmb();
+
 	entry_set_bits(&context->lo, 1ULL << 0, 1ULL);
 }
 
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index f8fc73676192..0d50959a0495 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -228,6 +228,12 @@ static inline void pasid_set_wpe(struct pasid_entry *pe)
  */
 static inline void pasid_set_present(struct pasid_entry *pe)
 {
+	/*
+	 * Make sure the present bit is not set before the other bits have
+	 * been updated. See also the comment in context_set_present().
+	 */
+	smp_wmb();
+
 	entry_set_bits(&pe->val[0], 1 << 0, 1);
 }
 
-- 
2.47.3

