Message-ID: <20251227175728.4358-1-dmaluka@chromium.org>
Date: Sat, 27 Dec 2025 18:57:23 +0100
From: Dmytro Maluka <dmaluka@...omium.org>
To: David Woodhouse <dwmw2@...radead.org>,
	Lu Baolu <baolu.lu@...ux.intel.com>,
	iommu@...ts.linux.dev
Cc: Joerg Roedel <joro@...tes.org>,
	Will Deacon <will@...nel.org>,
	Robin Murphy <robin.murphy@....com>,
	linux-kernel@...r.kernel.org,
	"Vineeth Pillai (Google)" <vineeth@...byteword.org>,
	Aashish Sharma <aashish@...hishsharma.net>,
	Grzegorz Jaszczyk <jaszczyk@...omium.org>,
	Chuanxiao Dong <chuanxiao.dong@...el.com>,
	Kevin Tian <kevin.tian@...el.com>,
	Dmytro Maluka <dmaluka@...omium.org>
Subject: [PATCH v2 0/5] iommu/vt-d: Ensure memory ordering in context & root entry updates

As discussed in [1], we currently don't prevent the compiler from
reordering memory writes when updating context entries. This is
potentially dangerous: the present bit (i.e. the bit that enables DMA
translation for the given device) may end up being set before the
other bits of the context entry have been fully set up, creating a
time window during which a DMA from the device may result in
unpredictable behavior.

Fix this in the same way it is already addressed for PASID entries,
i.e. by using READ_ONCE/WRITE_ONCE in the helpers that set individual
bits in context entries, so that the memory writes done by those
helpers are ordered relative to each other (and load/store tearing is
prevented as well).

While at it, apply the same (similarly paranoid) fix to root entry
updates as well: use WRITE_ONCE to make sure that the present bit is
set atomically together with the context table address bits, not
before them.
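
Purely as an illustration (the helper name below is made up, not taken
from the patches), the root entry update then amounts to publishing
the context table address and the present bit with a single store:

  /* Hypothetical helper: publish a context table in a root entry. */
  static inline void root_entry_publish(u64 *entry, u64 ctx_table_pa)
  {
          /* The present bit and the address become visible together. */
          WRITE_ONCE(*entry, ctx_table_pa | 1);
  }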

[1] https://lore.kernel.org/all/aTG7gc7I5wExai3S@google.com/

v1 -> v2:
- Sanitize bits to not exceed the mask (suggested by Baolu)
- Reuse pasid_set_bits() for context entries as well (rename it to
  entry_set_bits())
- Add extra barrier in *_set_present() (suggested by Baolu)
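
To illustrate the last point (again only a sketch with assumed names,
reusing entry_set_bits() from the sketch above, not the actual patch
contents), the extra barrier orders all earlier stores to the entry
before the store that sets the present bit:

  static inline void entry_set_present(u64 *entry_lo)
  {
          /*
           * Make sure all previously written entry fields are visible
           * before the entry is marked present.
           */
          smp_wmb();
          entry_set_bits(entry_lo, 1ULL << 0, 1ULL << 0);
  }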

Dmytro Maluka (5):
  iommu/vt-d: Sanitize set bits in pasid_set_bits()
  iommu/vt-d: Generalize pasid_set_bits()
  iommu/vt-d: Ensure memory ordering in context entry updates
  iommu/vt-d: Use smp_wmb() before setting context/pasid present bit
  iommu/vt-d: Use WRITE_ONCE for setting root table entries

 drivers/iommu/intel/iommu.c |  2 +-
 drivers/iommu/intel/iommu.h | 49 +++++++++++++++++++++++++------------
 drivers/iommu/intel/pasid.c |  3 ++-
 drivers/iommu/intel/pasid.h | 46 +++++++++++++++++-----------------
 4 files changed, 59 insertions(+), 41 deletions(-)

-- 
2.47.3

