Message-ID: <20250131010601.469904-1-seanjc@google.com>
Date: Thu, 30 Jan 2025 17:06:01 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: [PATCH] KVM: nSVM: Never use L0's PAUSE loop exiting while L2 is running

Never use L0's (KVM's) PAUSE loop exiting controls while L2 is running,
and instead always configure vmcb02 according to L1's exact capabilities
and desires.

The purpose of intercepting PAUSE after N attempts is to detect when the
vCPU may be stuck waiting on a lock, so that KVM can schedule in a
different vCPU that may be holding said lock. Barring a very interesting
setup, L1 and L2 do not share locks, and it's extremely unlikely that an
L1 vCPU would hold a spinlock while running L2. I.e. having a vCPU
executing in L1 yield to a vCPU running in L2 will not allow the L1 vCPU
to make forward progress, and vice versa.
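
For context, KVM's PAUSE exit handler amounts to roughly the following
(a simplified sketch of svm.c's pause_interception(); the SEV-ES special
case for reading CPL is elided):

	static int pause_interception(struct kvm_vcpu *vcpu)
	{
		/* A vCPU that PAUSEs at CPL0 is likely spinning on a kernel lock. */
		bool in_kernel = svm_get_cpl(vcpu) == 0;

		grow_ple_window(vcpu);

		/* Directed yield to a runnable vCPU that may hold the contended lock. */
		kvm_vcpu_on_spin(vcpu, in_kernel);
		return kvm_skip_emulated_instruction(vcpu);
	}
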
While teaching KVM's "on spin" logic to only yield to other vCPUs in L2 is
doable, in all likelihood it would do more harm than good for most setups.
KVM has limited visibility into which L2 "vCPUs" belong to the same VM,
and thus share a locking domain. And even if L2 vCPUs are in the same
VM, KVM has no visibility into L2 vCPUs that are scheduled out by the
L1 hypervisor.

Furthermore, KVM doesn't actually steal PAUSE exits from L1. If L1 is
intercepting PAUSE, KVM will route PAUSE exits to L1, not L0, as
nested_svm_intercept() gives priority to the vmcb12 intercept. As such,
overriding the count/threshold fields in vmcb02 with vmcb01's values is
nonsensical, as doing so clobbers all the training/learning that has been
done in L1.
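
The routing in question is the default case of nested_svm_intercept()
(simplified sketch; the dedicated MSR, IOIO, and exception cases are
elided):

	static int nested_svm_intercept(struct vcpu_svm *svm)
	{
		u32 exit_code = svm->vmcb->control.exit_code;
		int vmexit = NESTED_EXIT_HOST;

		switch (exit_code) {
		/* ... MSR, IOIO, and exception intercepts elided ... */
		default:
			/* Reflect the exit to L1 if vmcb12 intercepts it... */
			if (vmcb12_is_intercept(&svm->nested.ctl, exit_code))
				vmexit = NESTED_EXIT_DONE;
			/* ... otherwise let L0 (KVM) handle the exit. */
			break;
		}

		return vmexit;
	}
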
Even worse, if L1 is not intercepting PAUSE, i.e. KVM is handling PAUSE
exits, then KVM will adjust the PLE knobs based on L2 behavior, which could
very well be detrimental to L1, e.g. due to essentially poisoning L1 PLE
training with bad data.
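
Concretely, every PAUSE exit that KVM handles while L2 is running would
retrain vmcb02's filter count via the common helper behind
grow_ple_window(), which behaves roughly as follows (adapted from
arch/x86/kvm/x86.h, comments added):

	static inline unsigned int __grow_ple_window(unsigned int val,
			unsigned int base, unsigned int modifier, unsigned int max)
	{
		u64 ret = val;

		if (modifier < 1)
			return base;

		/* Grow multiplicatively or additively, then clamp to the max. */
		if (modifier < base)
			ret *= modifier;
		else
			ret += modifier;

		return min(ret, (u64)max);
	}
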
And copying the count from vmcb02 to vmcb01 on a nested VM-Exit makes even
less sense, because again, the purpose of PLE is to detect spinning vCPUs.
Whether or not a vCPU is spinning in L2 at the time of a nested VM-Exit
has no bearing on the behavior of the vCPU when it executes in L1.

The only scenarios where any of this actually works are those where at least one
of KVM or L1 is NOT intercepting PAUSE for the guest. Per the original
changelog, those were the only scenarios considered to be supported.
Disabling KVM's use of PLE while L2 is running ensures the VM is always
in a "supported" mode.

Last, but certainly not least, using KVM's count/threshold instead of the
values provided by L1 is a blatant violation of the SVM architecture.

Fixes: 74fd41ed16fd ("KVM: x86: nSVM: support PAUSE filtering when L0 doesn't intercept PAUSE")
Cc: Maxim Levitsky <mlevitsk@...hat.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/kvm/svm/nested.c | 44 +++++++++++++--------------------------
arch/x86/kvm/svm/svm.c | 6 ------
2 files changed, 14 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d77b094d9a4d..9330c15de6b7 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -171,6 +171,16 @@ void recalc_intercepts(struct vcpu_svm *svm)
if (!intercept_smi)
vmcb_clr_intercept(c, INTERCEPT_SMI);

+ /*
+ * Intercept PAUSE if and only if L1 wants to. KVM intercepts PAUSE so
+ * that a vCPU that may be spinning waiting for a lock can be scheduled
+ * out in favor of the vCPU that holds said lock. KVM doesn't support
+ * yielding across L2 vCPUs, as KVM has limited visibility into which
+ * L2 vCPUs are in the same L2 VM, i.e. may be contending for locks.
+ */
+ if (!vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_PAUSE))
+ vmcb_clr_intercept(c, INTERCEPT_PAUSE);
+
if (nested_vmcb_needs_vls_intercept(svm)) {
/*
* If the virtual VMLOAD/VMSAVE is not enabled for the L2,
@@ -643,8 +653,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
struct kvm_vcpu *vcpu = &svm->vcpu;
struct vmcb *vmcb01 = svm->vmcb01.ptr;
struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
- u32 pause_count12;
- u32 pause_thresh12;

/*
* Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
@@ -736,31 +744,13 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;

if (guest_cpu_cap_has(vcpu, X86_FEATURE_PAUSEFILTER))
- pause_count12 = svm->nested.ctl.pause_filter_count;
+ vmcb02->control.pause_filter_count = svm->nested.ctl.pause_filter_count;
else
- pause_count12 = 0;
+ vmcb02->control.pause_filter_count = 0;

if (guest_cpu_cap_has(vcpu, X86_FEATURE_PFTHRESHOLD))
- pause_thresh12 = svm->nested.ctl.pause_filter_thresh;
+ vmcb02->control.pause_filter_thresh = svm->nested.ctl.pause_filter_thresh;
else
- pause_thresh12 = 0;
- if (kvm_pause_in_guest(svm->vcpu.kvm)) {
- /* use guest values since host doesn't intercept PAUSE */
- vmcb02->control.pause_filter_count = pause_count12;
- vmcb02->control.pause_filter_thresh = pause_thresh12;
-
- } else {
- /* start from host values otherwise */
- vmcb02->control.pause_filter_count = vmcb01->control.pause_filter_count;
- vmcb02->control.pause_filter_thresh = vmcb01->control.pause_filter_thresh;
-
- /* ... but ensure filtering is disabled if so requested. */
- if (vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_PAUSE)) {
- if (!pause_count12)
- vmcb02->control.pause_filter_count = 0;
- if (!pause_thresh12)
- vmcb02->control.pause_filter_thresh = 0;
- }
- }
+ vmcb02->control.pause_filter_thresh = 0;

nested_svm_transition_tlb_flush(vcpu);
@@ -1033,12 +1023,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
vmcb12->control.event_inj = svm->nested.ctl.event_inj;
vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err;

- if (!kvm_pause_in_guest(vcpu->kvm)) {
- vmcb01->control.pause_filter_count = vmcb02->control.pause_filter_count;
- vmcb_mark_dirty(vmcb01, VMCB_INTERCEPTS);
-
- }
-
nested_svm_copy_common_state(svm->nested.vmcb02.ptr, svm->vmcb01.ptr);
svm_switch_vmcb(svm, &svm->vmcb01);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7640a84e554a..ad5accc29db8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1079,9 +1079,6 @@ static void grow_ple_window(struct kvm_vcpu *vcpu)
struct vmcb_control_area *control = &svm->vmcb->control;
int old = control->pause_filter_count;

- if (kvm_pause_in_guest(vcpu->kvm))
- return;
-
control->pause_filter_count = __grow_ple_window(old,
pause_filter_count,
pause_filter_count_grow,
@@ -1100,9 +1097,6 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
struct vmcb_control_area *control = &svm->vmcb->control;
int old = control->pause_filter_count;

- if (kvm_pause_in_guest(vcpu->kvm))
- return;
-
control->pause_filter_count =
__shrink_ple_window(old,
pause_filter_count,
base-commit: eb723766b1030a23c38adf2348b7c3d1409d11f0
--
2.48.1.362.g079036d154-goog