Message-ID: <aBKzPyqNTwogNLln@google.com>
Date: Wed, 30 Apr 2025 16:33:19 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Borislav Petkov <bp@...en8.de>
Cc: Yosry Ahmed <yosry.ahmed@...ux.dev>, Patrick Bellasi <derkling@...gle.com>, 
	Paolo Bonzini <pbonzini@...hat.com>, Josh Poimboeuf <jpoimboe@...hat.com>, 
	Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>, x86@...nel.org, kvm@...r.kernel.org, 
	linux-kernel@...r.kernel.org, Patrick Bellasi <derkling@...bug.net>, 
	Brendan Jackman <jackmanb@...gle.com>, David Kaplan <David.Kaplan@....com>, 
	Michael Larabel <Michael@...haellarabel.com>
Subject: Re: x86/bugs: KVM: Add support for SRSO_MSR_FIX, back for moar

On Tue, Apr 29, 2025, Borislav Petkov wrote:
> On Tue, Feb 18, 2025 at 12:13:33PM +0100, Borislav Petkov wrote:
> > So,
> > 
> > in the interest of finally making some progress here I'd like to commit this
> > below (will test it one more time just in case but it should work :-P). It is
> > simple and straight-forward and doesn't need an IBPB when the bit gets
> > cleared.
> > 
> > A potential future improvement is David's suggestion that there could be a way
> > for tracking when the first guest gets started, we set the bit then, we make
> > sure the bit gets set on each logical CPU when the guests migrate across the
> > machine and when the *last* guest exits, that bit gets cleared again.
> 
> Well, that "simplicity" was short-lived:
> 
> https://www.phoronix.com/review/linux-615-amd-regression

LOL.

> Sean, how about this below?

Eww.  That's quite painful, and completely disallowing enable_virt_at_load is
undesirable, e.g. for use cases where the host is (almost) exclusively running
VMs.

Best idea I have is to throw in the towel on getting fancy, and just maintain a
dedicated count in SVM.

Alternatively, we could plumb an arch hook into kvm_create_vm() and kvm_destroy_vm()
that's called when KVM adds/deletes a VM from vm_list, and key off vm_list being
empty.  But that adds a lot of boilerplate just to avoid a mutex+count.
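Very roughly, and purely as an untested sketch (the hook and helper names below
are invented for illustration, and this ignores the indirection needed to get
from a kvm_arch_*() hook down into vendor code):

	/* virt/kvm/kvm_main.c, in kvm_create_vm(); kvm_destroy_vm() would be
	 * the mirror image, keying off list_empty() after list_del(). */
	mutex_lock(&kvm_lock);
	list_add(&kvm->vm_list, &vm_list);
	kvm_arch_vm_list_add(kvm, list_is_singular(&vm_list));
	mutex_unlock(&kvm_lock);

	/* arch/x86/kvm/svm/svm.c: set BP_SPEC_REDUCE when the first VM appears */
	static void svm_set_bp_spec_reduce(void *ign)
	{
		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
	}

	void kvm_arch_vm_list_add(struct kvm *kvm, bool first_vm)
	{
		if (first_vm && cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
			on_each_cpu(svm_set_bp_spec_reduce, NULL, 1);
	}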

I haven't tested on a system with X86_FEATURE_SRSO_BP_SPEC_REDUCE, but did verify
the mechanics by inverting the flag.

--
From: Sean Christopherson <seanjc@...gle.com>
Date: Wed, 30 Apr 2025 15:34:50 -0700
Subject: [PATCH] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count
 transitions

Set the magic BP_SPEC_REDUCE bit to mitigate SRSO when running VMs if and
only if KVM has at least one active VM.  Leaving the bit set at all times
unfortunately degrades performance by a wee bit more than expected.

Use a dedicated mutex and counter instead of hooking virtualization
enablement, as changing the behavior of kvm.enable_virt_at_load based on
SRSO_BP_SPEC_REDUCE is painful, and has its own drawbacks, e.g. could
result in performance issues for flows that are sensitive to VM creation
latency.

Fixes: 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
Reported-by: Michael Larabel <Michael@...haellarabel.com>
Closes: https://www.phoronix.com/review/linux-615-amd-regression
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 arch/x86/kvm/svm/svm.c | 39 +++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d5d0c5c3300b..fe8866572218 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -607,9 +607,6 @@ static void svm_disable_virtualization_cpu(void)
 	kvm_cpu_svm_disable();
 
 	amd_pmu_disable_virt();
-
-	if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
-		msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
 }
 
 static int svm_enable_virtualization_cpu(void)
@@ -687,9 +684,6 @@ static int svm_enable_virtualization_cpu(void)
 		rdmsr(MSR_TSC_AUX, sev_es_host_save_area(sd)->tsc_aux, msr_hi);
 	}
 
-	if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
-		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
-
 	return 0;
 }
 
@@ -5032,10 +5026,42 @@ static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 	sev_vcpu_deliver_sipi_vector(vcpu, vector);
 }
 
+static DEFINE_MUTEX(srso_lock);
+static int srso_nr_vms;
+
+static void svm_toggle_srso_spec_reduce(void *set)
+{
+	if (set)
+		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
+	else
+		msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
+}
+
+static void svm_srso_add_remove_vm(int count)
+{
+	bool set;
+
+	if (!cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
+		return;
+
+	guard(mutex)(&srso_lock);
+
+	set = !srso_nr_vms;
+	srso_nr_vms += count;
+
+	WARN_ON_ONCE(srso_nr_vms < 0);
+	if (!set && srso_nr_vms)
+		return;
+
+	on_each_cpu(svm_toggle_srso_spec_reduce, (void *)set, 1);
+}
+
 static void svm_vm_destroy(struct kvm *kvm)
 {
 	avic_vm_destroy(kvm);
 	sev_vm_destroy(kvm);
+
+	svm_srso_add_remove_vm(-1);
 }
 
 static int svm_vm_init(struct kvm *kvm)
@@ -5061,6 +5087,7 @@ static int svm_vm_init(struct kvm *kvm)
 			return ret;
 	}
 
+	svm_srso_add_remove_vm(1);
 	return 0;
 }
 

base-commit: f158e1b145f73aae1d3b7e756eb129a15b2b7a90
--
