lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <aV0vM9ad-4QhCNaA@mitya-t14-2025>
Date: Tue, 6 Jan 2026 16:50:11 +0100
From: Dmytro Maluka <dmaluka@...omium.org>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: David Woodhouse <dwmw2@...radead.org>,
	Lu Baolu <baolu.lu@...ux.intel.com>, iommu@...ts.linux.dev,
	Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
	Robin Murphy <robin.murphy@....com>, linux-kernel@...r.kernel.org,
	"Vineeth Pillai (Google)" <vineeth@...byteword.org>,
	Aashish Sharma <aashish@...hishsharma.net>,
	Grzegorz Jaszczyk <jaszczyk@...omium.org>,
	Chuanxiao Dong <chuanxiao.dong@...el.com>,
	Kevin Tian <kevin.tian@...el.com>
Subject: Re: [PATCH v2 0/5] iommu/vt-d: Ensure memory ordering in context &
 root entry updates

On Tue, Jan 06, 2026 at 10:23:01AM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 06, 2026 at 02:51:38PM +0100, Dmytro Maluka wrote:
> > Regarding flushing caches right after that - what for? (BTW the Intel
> > driver doesn't do that either.) If we don't do that and as a result the
> > HW is using an old entry cached before we cleared the present bit, it
> > is not affected by our later modifications anyway.
> 
> You don't know what state the HW fetcher is in. This kind of race is possible:
> 
>      CPU                 FETCHER
>                         read present = 1
>     present = 0
>     mangle qword 1
>                         read qword 1
>                         < fail - HW sees a corrupted entry >
> 
> The flush is not just a flush but a barrier to synchronize with the
> HW, ensuring it has completed all fetches that may have depended on
> seeing present = 1.
> 
> So missing a flush after clearing present is possibly a bug today - I
> don't remember what the guaranteed atomic fetch size is for the Intel
> IOMMU, though; if the atomic size is the whole entry it is OK, since
> there is then only one fetcher read. (AMD's is 128 bits and ARM's is
> 64 bits.)

Indeed, may be a bug... In the VT-d spec I don't immediately see a
guarantee that context and PASID entries are fetched atomically. (And
for PASID entries, which are 512 bits, that seems particularly
unlikely.)
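
For illustration, the safe update ordering under discussion could look
like the following minimal sketch. The two-word entry layout, the bit
position of the present bit, and the flush hook are hypothetical
stand-ins for this thread's discussion, not the actual VT-d driver code
(WRITE_ONCE and the final fence are also simplified local definitions):

```c
/* Hypothetical 128-bit context entry: two 64-bit words, with the
 * present bit in bit 0 of the low word. */
struct ctx_entry { unsigned long lo, hi; };

/* Simplified local stand-in for the kernel's WRITE_ONCE(). */
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

/* Stand-in for the IOMMU cache flush + completion wait; a real driver
 * would issue an invalidation command and wait until the HW confirms
 * it, guaranteeing no fetcher is still mid-read of the old entry. */
static void flush_and_sync_hw(struct ctx_entry *e) { (void)e; }

static void update_entry(struct ctx_entry *e,
			 unsigned long new_lo, unsigned long new_hi)
{
	/* 1. Make the entry non-present so the HW stops using it. */
	WRITE_ONCE(e->lo, e->lo & ~1UL);

	/* 2. Barrier: synchronize with the HW so no fetch that saw
	 *    present = 1 can still be in flight. */
	flush_and_sync_hw(e);

	/* 3. The HW cannot be reading now; plain stores are fine. */
	e->hi = new_hi;

	/* 4. Publish: order the qword stores before setting present
	 *    (stand-in for dma_wmb()). */
	__sync_synchronize();
	WRITE_ONCE(e->lo, new_lo | 1UL);
}
```

Without step 2, the fetch race in the diagram above remains possible:
the fetcher may have read present = 1 before step 1 and still be
reading the other qwords while step 3 mangles them.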

> > I was talking about compiler guarantees, not HW guarantees. I mean: when
> > setting some other bit in the entry before the barrier, if we do that
> > without WRITE_ONCE, with a mere "foo |= bar", are we certain the
> > compiler will not implement that as, for example, setting the value to
> > 0xffffffffffffffff and then clearing other bits (for whatever crazy
> reason)? That would still be a legal thing for the compiler to do, in
> terms of its single-thread guarantees?
> 
> The HW doesn't read the values the CPU is writing, so it doesn't
> matter if the compiler does something strange. That is the whole
> justification for why it is possible to code it like this at all.
> 
> The dma_mb() is also a compiler barrier and ensures all that
> uncertainty is resolved. Once it completes, a DMA from the HW will
> see only the program-defined values.

The HW (the IOMMU) may read context / PASID / page table entries from
memory at any time, i.e. whenever a device makes a DMA request that
the IOMMU needs to translate? So we don't know whether that read
happens before or after the barrier?

So we'd better make sure that if it happens before the barrier (i.e.
while the device is not supposed to be doing DMA), the compiler (and
thus the CPU) never sets the present bit, so the entry stays
non-present and the IOMMU blocks this unexpected/malicious DMA?
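
This concern is exactly what a volatile store addresses: it must be
performed exactly once, with exactly the given value, so the compiler
cannot invent an intermediate store with the present bit set. A minimal
sketch (the entry word, bit layout and helper name are hypothetical,
and WRITE_ONCE is a simplified local definition):

```c
#include <stdint.h>

#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))
#define PRESENT 1ULL

static uint64_t ctx_lo;	/* the entry word holding the present bit */

/* With a plain "ctx_lo |= flags;" the compiler may in principle emit
 * any sequence of stores whose final result is correct - including
 * one that transiently sets PRESENT.  The volatile store below is a
 * single store of exactly this value, so the entry can never be
 * observed with PRESENT set before we intend it. */
void set_flags_nonpresent(uint64_t flags)
{
	WRITE_ONCE(ctx_lo, (ctx_lo | flags) & ~PRESENT);
}
```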
