Message-ID: <20250607025213.o226wig3qtt5spv2@desk>
Date: Fri, 6 Jun 2025 19:52:13 -0700
From: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, Borislav Petkov <bp@...en8.de>,
	Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation

On Mon, Jun 02, 2025 at 06:22:08PM -0700, Pawan Gupta wrote:
> On Mon, Jun 02, 2025 at 04:41:35PM -0700, Sean Christopherson wrote:
> > > Regarding validating this: if VERW is executed at VM-entry, the mitigation
> > > was found to be effective. This is similar to other bugs like MDS. I am not
> > > a virtualization expert, but I will try to validate whatever I can.
> > 
> > If you can re-verify the mitigation works for VFIO devices, that's more than
> > good enough for me.  The bar at this point is to not regress the existing
> > mitigation; anything beyond that is gravy.
> 
> Ok sure. I'll verify that VERW is getting executed for VFIO devices.
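
For context, "executing VERW at VM-entry" refers to the CPU-buffer clearing
KVM performs just before entering the guest. A rough sketch of what that
clearing looks like (kernel context; not necessarily the exact helper in the
current tree):

	static __always_inline void clear_cpu_buffers_sketch(void)
	{
		/* A valid data segment selector; the kernel uses __KERNEL_DS. */
		static const u16 ds = __KERNEL_DS;

		/*
		 * The memory-operand form of VERW is the one documented to
		 * flush the affected CPU buffers on parts that enumerate
		 * the VERW-based clearing capability.
		 */
		asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
	}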

I have verified that with the below patches, CPU buffer clearing for MMIO
Stale Data works as expected for a VFIO device.

  KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
  KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a SPTE
  KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL

For the above patches:

Tested-by: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
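
The debug prints are roughly of the following shape, placed around the
buffer-clearing decision in vmx_vcpu_enter_exit(); the mitigation-enable
check and the MMIO predicate below are placeholders, not the exact helpers
used by the series:

	if (mmio_mitigation_enabled() &&		/* placeholder */
	    vcpu_can_access_host_mmio(vcpu)) {		/* placeholder */
		pr_warn_ratelimited("vmx_vcpu_enter_exit: CPU buffer cleared for MMIO\n");
		/* ... VERW / CPU-buffer clear happens on this path ... */
	} else {
		pr_warn_ratelimited("vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO\n");
	}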

Below are excerpts from the logs with debug prints added:

# virsh start ubuntu24.04                                                      <------ Guest launched
[ 5737.281649] virbr0: port 1(vnet1) entered blocking state
[ 5737.281659] virbr0: port 1(vnet1) entered disabled state
[ 5737.281686] vnet1: entered allmulticast mode
[ 5737.281775] vnet1: entered promiscuous mode
[ 5737.282026] virbr0: port 1(vnet1) entered blocking state
[ 5737.282032] virbr0: port 1(vnet1) entered listening state
[ 5737.775162] vmx_vcpu_enter_exit: 13085 callbacks suppressed
[ 5737.775169] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO  <----- Buffers not cleared
[ 5737.775192] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5737.775203] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
...
Domain 'ubuntu24.04' started

[ 5739.323529] virbr0: port 1(vnet1) entered learning state
[ 5741.372527] virbr0: port 1(vnet1) entered forwarding state
[ 5741.372540] virbr0: topology change detected, propagating
[ 5742.906218] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5742.906232] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5742.906234] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5747.906515] vmx_vcpu_enter_exit: 267825 callbacks suppressed
...

# virsh attach-device ubuntu24.04 vfio.xml  --live                            <----- Device attached

[ 5749.913996] ioatdma 0000:00:01.1: Removing dma and dca services
[ 5750.786112] vfio-pci 0000:00:01.1: resetting
[ 5750.891646] vfio-pci 0000:00:01.1: reset done
[ 5750.900521] vfio-pci 0000:00:01.1: resetting
[ 5751.003645] vfio-pci 0000:00:01.1: reset done
Device attached successfully
[ 5751.074292] kvm_intel: vmx_vcpu_enter_exit: CPU buffer cleared for MMIO    <----- Buffers getting cleared
[ 5751.074293] kvm_intel: vmx_vcpu_enter_exit: CPU buffer cleared for MMIO
[ 5751.074294] kvm_intel: vmx_vcpu_enter_exit: CPU buffer cleared for MMIO
[ 5756.076427] vmx_vcpu_enter_exit: 68991 callbacks suppressed
