Message-ID: <aD42rwMoJ0gh5VBy@google.com>
Date: Mon, 2 Jun 2025 16:41:35 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Borislav Petkov <bp@...en8.de>, Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation

On Wed, May 28, 2025, Pawan Gupta wrote:
> On Thu, May 22, 2025 at 06:17:51PM -0700, Sean Christopherson wrote:
> > Fix KVM's mitigation of the MMIO Stale Data bug, as the current approach
> > doesn't actually detect whether or not a guest has access to MMIO.  E.g.
> > KVM_DEV_VFIO_FILE_ADD is entirely optional, and obviously only covers VFIO
> 
> I believe this needs userspace co-operation?

Yes, more or less.  If the userspace VMM knows it doesn't need to trigger the
side effects of KVM_DEV_VFIO_FILE_ADD (e.g. isn't dealing with non-coherent DMA),
and doesn't need the VFIO<=>KVM binding (e.g. for KVM-GT), then AFAIK it's safe
to skip KVM_DEV_VFIO_FILE_ADD, modulo this mitigation.
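
For reference, a minimal userspace sketch of the KVM_DEV_VFIO_FILE_ADD step being
discussed (not from this thread; the helper name and error handling are mine).  It
creates the KVM VFIO pseudo-device and registers a VFIO file descriptor with it,
which is exactly the optional step the current mitigation leans on:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical helper: tell KVM about a VFIO device/group fd. */
	static int add_vfio_file_to_kvm(int vm_fd, int32_t vfio_fd)
	{
		struct kvm_create_device cd = { .type = KVM_DEV_TYPE_VFIO };
		struct kvm_device_attr attr = {
			.group = KVM_DEV_VFIO_FILE,
			.attr  = KVM_DEV_VFIO_FILE_ADD,
			.addr  = (uint64_t)(uintptr_t)&vfio_fd,
		};

		/* Create the KVM VFIO device, then hand it the VFIO fd.  A VMM
		 * that skips this still gets a working VM, which is why it is
		 * a poor proxy for "can the guest reach host MMIO?". */
		if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd) < 0)
			return -1;
		return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
	}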

> > devices, and so is a terrible heuristic for "can this vCPU access MMIO?"
> > 
> > To fix the flaw (hopefully), track whether or not a vCPU has access to MMIO
> > based on the MMU it will run with.  KVM already detects host MMIO when
> > installing PTEs in order to force host MMIO to UC (EPT bypasses MTRRs), so
> > feeding that information into the MMU is rather straightforward.
> > 
> > Note, I haven't actually verified this mitigates the MMIO Stale Data bug, but
> > I think it's safe to say no one has verified the existing code works either.
> 
> Mitigation was verified for VFIO devices, but of course not for the cases you
> mentioned above. Typically, it is the PCI config registers on some faulty
> devices (that don't respect byte enables) that are subject to MMIO Stale Data.
>
> But, it is impossible to test and confirm with absolute certainty that all

Yeah, no argument there.  

> other cases are not affected. Your patches should rule out those cases as
> well.
> 
> Regarding validating this: if VERW is executed at VM-entry, the mitigation was
> found to be effective. This is similar to other bugs like MDS. I am not a
> virtualization expert, but I will try to validate whatever I can.

If you can re-verify that the mitigation works for VFIO devices, that's more than
good enough for me.  The bar at this point is to not regress the existing
mitigation; anything beyond that is gravy.

I've verified the KVM mechanics of tracking MMIO mappings fairly well (famous last
words); the only thing I haven't sanity-checked is that the existing coverage for
VFIO devices is maintained.
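
To make the change of gating concrete, here is a stand-alone toy model (compilable
with any C compiler; every name in it is invented, it is not the actual patch code):
the old heuristic keys the VM-entry VERW off "a device was assigned via
KVM_DEV_VFIO_FILE_ADD", while the series keys it off "the MMU this vCPU runs with
maps host MMIO", which KVM already learns when installing PTEs to force host MMIO
to UC.

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_mmu  { bool maps_host_mmio; };      /* set while installing PTEs */
	struct toy_vm   { bool has_assigned_device; }; /* set by KVM_DEV_VFIO_FILE_ADD */
	struct toy_vcpu { struct toy_vm *vm; struct toy_mmu *mmu; };

	/* Old heuristic: clear buffers only if userspace registered a VFIO device. */
	static bool old_needs_verw(struct toy_vcpu *vcpu)
	{
		return vcpu->vm->has_assigned_device;
	}

	/* Proposed gating: record host MMIO at the point KVM already detects it
	 * for memtype purposes, then key the VM-entry clear off that. */
	static void toy_install_pte(struct toy_mmu *mmu, bool is_host_mmio)
	{
		if (is_host_mmio)
			mmu->maps_host_mmio = true;
	}

	static bool new_needs_verw(struct toy_vcpu *vcpu)
	{
		return vcpu->mmu->maps_host_mmio;
	}

	int main(void)
	{
		struct toy_vm vm = { .has_assigned_device = false };
		struct toy_mmu mmu = { .maps_host_mmio = false };
		struct toy_vcpu vcpu = { .vm = &vm, .mmu = &mmu };

		/* Guest maps host MMIO without KVM_DEV_VFIO_FILE_ADD ever
		 * being called, e.g. via a non-VFIO path. */
		toy_install_pte(&mmu, true);

		printf("old heuristic clears buffers: %d\n", old_needs_verw(&vcpu));
		printf("new gating clears buffers:    %d\n", new_needs_verw(&vcpu));
		return 0;
	}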
