Message-ID: <c8a8ddbc4b1c53429748d239f2a8461f180a5133.camel@infradead.org>
Date: Tue, 23 Sep 2025 10:02:38 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: Alexander Graf <graf@...zon.com>, Priscilla Lam <prl@...zon.com>,
maz@...nel.org, oliver.upton@...ux.dev
Cc: christoffer.dall@....com, gurugubs@...zon.com, jgrall@...zon.co.uk,
joey.gouly@....com, kvmarm@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
suzuki.poulose@....com, yuzenghui@...wei.com
Subject: Re: [PATCH] KVM: arm64: Implement KVM_TRANSLATE ioctl for arm64
On Tue, 2025-09-23 at 10:39 +0200, Alexander Graf wrote:
>
> All modern OSs constrain themselves to only ISV enabled MMIO
> instructions.
Why? That does not appear to be required by the architecture, AFAICT.
And what is "MMIO" architecturally, anyway? Using LDP is actually fine
on MMIO if a PCI BAR is passed through to a guest (and assuming there
are no ordering dependencies on the loads). The problem is when the
page is missing from the stage-2 page tables: the access then traps to
the hypervisor without valid syndrome information (ESR_EL2.ISV == 0),
so KVM cannot emulate it.
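
To make that concrete (a minimal sketch; crb_regs is a made-up pair of
adjacent 64-bit device registers, not any real layout): two
neighbouring reads through a plain pointer are fair game for the
compiler to merge into one LDP:

        #include <stdint.h>
        #include <string.h>

        /* Made-up layout: two adjacent 64-bit device registers. */
        struct crb_regs {
                uint64_t ctrl_req;
                uint64_t ctrl_sts;
        };

        /*
         * A plain C copy from a mapped BAR: the compiler may legally
         * combine the two loads into a single LDP. LDP never provides
         * valid syndrome information (ESR_EL2.ISV == 0) on a stage-2
         * Data Abort, so if the page is absent from the stage-2
         * tables all KVM can do is report KVM_EXIT_ARM_NISV.
         */
        void snapshot(const struct crb_regs *bar, struct crb_regs *out)
        {
                memcpy(out, bar, sizeof(*out));
        }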
Other than lying to the compiler by *pretending* there are ordering
constraints, or explicitly using inline asm, how would a guest OS
*prevent* the compiler from emitting such problematic instructions?
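
For completeness, a sketch of what those two hacks look like
(mmio_read64() and mmio_read64_asm() are made-up names, loosely
modelled on readq_relaxed()):

        #include <stdint.h>

        /*
         * Workaround 1: "lie" about ordering. A volatile access
         * forces the compiler to emit exactly one load per read, so
         * it cannot merge neighbouring reads into an LDP.
         */
        static inline uint64_t mmio_read64(const volatile void *addr)
        {
                return *(const volatile uint64_t *)addr;
        }

        /*
         * Workaround 2: inline asm. A single LDR of one
         * general-purpose register is exactly the class of access
         * that is architected to deliver a valid syndrome (ISV == 1)
         * on a stage-2 fault.
         */
        static inline uint64_t mmio_read64_asm(const volatile void *addr)
        {
                uint64_t val;

                asm volatile("ldr %0, [%1]" : "=r" (val) : "r" (addr));
                return val;
        }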
> NISV is really only useful to run non-hypervisor-enlightened
> guests. Did you encounter a real-world OS that was causing
> NISV faults? And if so, what was causing these? It's most likely a
> driver or OS bug.
The Windows CRB TPM driver uses LDP. Even if it *were* forbidden by the
architecture, the track record of getting Microsoft to fix bugs *even*
when they are egregious violations of both specifications and common
sense is... poor.
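
FWIW, with the KVM_TRANSLATE from $SUBJECT, the userspace side of
handling that could look something like the sketch below
(guest_phys_to_host() and emulate_insn() are made-up helpers, and
ARM64_CORE_REG() is as defined in the kernel selftests): take the
KVM_EXIT_ARM_NISV exit, read the guest PC, let the kernel do the
stage-1 walk, then fetch and decode the instruction by hand:

        #include <linux/kvm.h>
        #include <sys/ioctl.h>
        #include <stdint.h>

        /*
         * As in tools/testing/selftests/kvm: build a KVM_GET_ONE_REG
         * id for a core register from its offset into struct kvm_regs.
         */
        #define ARM64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
                                   KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))

        /* Made-up helpers: IPA -> VMM mapping, and the emulation. */
        void *guest_phys_to_host(uint64_t ipa);
        int emulate_insn(uint32_t insn, uint64_t fault_ipa);

        /* Called on KVM_EXIT_ARM_NISV (KVM_CAP_ARM_NISV_TO_USER on). */
        static int handle_nisv(int vcpu_fd, struct kvm_run *run)
        {
                uint64_t pc;
                struct kvm_one_reg reg = {
                        .id   = ARM64_CORE_REG(regs.pc),
                        .addr = (uint64_t)&pc,
                };
                struct kvm_translation tr = { 0 };
                uint32_t insn;

                if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
                        return -1;

                /* Stage-1 walk in the kernel: guest VA (PC) -> IPA. */
                tr.linear_address = pc;
                if (ioctl(vcpu_fd, KVM_TRANSLATE, &tr) < 0 || !tr.valid)
                        return -1;

                /* Fetch the faulting instruction (e.g. that LDP)... */
                insn = *(uint32_t *)guest_phys_to_host(tr.physical_address);

                /* ...and emulate it against the IPA KVM reported. */
                return emulate_insn(insn, run->arm_nisv.fault_ipa);
        }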