Message-Id: <20220427200314.276673-19-mlevitsk@redhat.com>
Date: Wed, 27 Apr 2022 23:03:13 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: kvm@...r.kernel.org
Cc: Wanpeng Li <wanpengli@...cent.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Tom Lendacky <thomas.lendacky@....com>,
Ingo Molnar <mingo@...hat.com>,
David Airlie <airlied@...ux.ie>,
Thomas Gleixner <tglx@...utronix.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
intel-gfx@...ts.freedesktop.org,
Sean Christopherson <seanjc@...gle.com>,
Daniel Vetter <daniel@...ll.ch>,
Borislav Petkov <bp@...en8.de>, Joerg Roedel <joro@...tes.org>,
linux-kernel@...r.kernel.org, Jim Mattson <jmattson@...gle.com>,
Zhi Wang <zhi.a.wang@...el.com>,
Brijesh Singh <brijesh.singh@....com>,
"H. Peter Anvin" <hpa@...or.com>,
intel-gvt-dev@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: [RFC PATCH v3 18/19] KVM: x86: SVM/nSVM: add optional non strict AVIC doorbell mode
By default, a vCPU's peers can send it doorbell messages only while
that vCPU is loaded on (assigned to) a physical CPU.  When doorbell
messages are not allowed, all of the vCPU's peers take VM exits
instead, which is suboptimal when the vCPU is not halted but merely
not running in guest mode for a moment, e.g. because it was scheduled
out and/or took a userspace VM exit.  In that case the peers cannot
make this vCPU enter guest mode any faster, so the VM exits they take
accomplish nothing.

Therefore, introduce a new non-strict mode, disabled by default and
enabled by setting the avic_doorbell_strict kvm_amd module parameter
to 0.  When this mode is enabled and a vCPU is scheduled out but not
halted, its peers can keep sending doorbell messages to the last
physical CPU on which the vCPU ran.

Security wise, in this mode a malicious guest with a compromised
guest kernel can, in some cases, use one of its vCPUs to slow down
whatever is running on the last physical CPU where another of its
vCPUs ran, by spamming that CPU with doorbell messages (hammering on
ICR).  This is why the mode is disabled by default.

However, when the admin policy is a 1:1 vCPU/pCPU mapping, this mode
can be useful to avoid VM exits when a vCPU takes a userspace VM exit
and the like.
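
For illustration only (not part of this patch): assuming kvm_amd is
built as a module and AVIC is otherwise enabled via its module
parameter, the non-strict mode could be selected at load time along
these lines:

  modprobe kvm_amd avic=1 avic_doorbell_strict=0

Since the parameter is 0444, its current value can be read back at
runtime from /sys/module/kvm_amd/parameters/avic_doorbell_strict, but
it cannot be changed without reloading the module.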
Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
---
arch/x86/kvm/svm/avic.c | 16 +++++++++-------
arch/x86/kvm/svm/svm.c | 25 +++++++++++++++++++++----
2 files changed, 30 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 149df26e17462..4bf0f00f13c12 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -1704,7 +1704,7 @@ avic_update_iommu_vcpu_affinity(struct kvm_vcpu *vcpu, int cpu, bool r)
 
 void __avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	u64 entry;
+	u64 old_entry, new_entry;
 	int h_physical_id = kvm_cpu_get_apicid(cpu);
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -1723,14 +1723,16 @@ void __avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (kvm_vcpu_is_blocking(vcpu))
 		return;
 
-	entry = READ_ONCE(*(svm->avic_physical_id_cache));
-	WARN_ON(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
+	old_entry = READ_ONCE(*(svm->avic_physical_id_cache));
+	new_entry = old_entry;
 
-	entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
-	entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
-	entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+	new_entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
+	new_entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
+	new_entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+
+	if (old_entry != new_entry)
+		WRITE_ONCE(*(svm->avic_physical_id_cache), new_entry);
 
-	WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
 	avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);
 }
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b31bab832360e..099329711ad13 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -191,6 +191,10 @@ module_param(avic, bool, 0444);
 static bool force_avic;
 module_param_unsafe(force_avic, bool, 0444);
 
+static bool avic_doorbell_strict = true;
+module_param(avic_doorbell_strict, bool, 0444);
+
+
 bool __read_mostly dump_invalid_vmcb;
 module_param(dump_invalid_vmcb, bool, 0644);
 
@@ -1402,10 +1406,23 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	if (kvm_vcpu_apicv_active(vcpu))
-		__avic_vcpu_put(vcpu);
-
-	__nested_avic_put(vcpu);
+	/*
+	 * Forbid this vCPU's peers from sending it doorbell messages,
+	 * unless the non-strict doorbell mode is used.
+	 *
+	 * In that mode, doorbell messages are forbidden only when a vCPU
+	 * blocks, since that is the only case in which an IPI must be
+	 * intercepted in order to wake the vCPU up.
+	 *
+	 * However, this reduces the isolation of the guest, since a flood
+	 * of spurious doorbell messages can slow down a CPU running
+	 * another task while this vCPU is scheduled out.
+	 */
+	if (avic_doorbell_strict) {
+		if (kvm_vcpu_apicv_active(vcpu))
+			__avic_vcpu_put(vcpu);
+		__nested_avic_put(vcpu);
+	}
 
 	svm_prepare_host_switch(vcpu);
 
--
2.26.3