Message-Id: <20220207155447.840194-19-mlevitsk@redhat.com>
Date: Mon, 7 Feb 2022 17:54:35 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: kvm@...r.kernel.org
Cc: Tony Luck <tony.luck@...el.com>,
"Chang S. Bae" <chang.seok.bae@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Wanpeng Li <wanpengli@...cent.com>,
Ingo Molnar <mingo@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
linux-kernel@...r.kernel.org,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
"H. Peter Anvin" <hpa@...or.com>,
intel-gvt-dev@...ts.freedesktop.org,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>,
Sean Christopherson <seanjc@...gle.com>,
David Airlie <airlied@...ux.ie>,
Zhi Wang <zhi.a.wang@...el.com>,
Brijesh Singh <brijesh.singh@....com>,
Jim Mattson <jmattson@...gle.com>, x86@...nel.org,
Daniel Vetter <daniel@...ll.ch>,
Borislav Petkov <bp@...en8.de>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: [PATCH RESEND 18/30] KVM: x86: mmu: add strict mmu mode
Add a (mostly debug) option to force KVM's shadow MMU
to never have unsync pages.

This is useful in some cases for debugging the shadow MMU.

It is also useful for some legacy guest OSes which don't
flush TLBs correctly, and thus don't work on modern
CPUs which have speculative MMUs: with unsync pages, KVM
resyncs shadow page tables only when the guest flushes its
TLB, so such guests can observe stale translations, just as
they would on bare metal.

Using this option together with legacy paging (npt/ept=0)
makes it possible to correctly simulate such an old MMU
while still getting most of the benefits of virtualization.
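For example, on an Intel host the mode could be enabled roughly
as follows (an illustrative sketch, assuming kvm and kvm_intel are
built as modules; on AMD hosts kvm_amd npt=0 would be used instead):

  # force the shadow MMU to be strict, and disable TDP
  modprobe kvm strict_mmu=1
  modprobe kvm_intel ept=0

Because the parameter is registered with mode 0644, it can also be
toggled at runtime via /sys/module/kvm/parameters/strict_mmu.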
Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
---
arch/x86/kvm/mmu/mmu.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 43c7abdd6b70f..fa2da6990703f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -91,6 +91,10 @@ __MODULE_PARM_TYPE(nx_huge_pages_recovery_period_ms, "uint");
 static bool __read_mostly force_flush_and_sync_on_reuse;
 module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
+
+bool strict_mmu;
+module_param(strict_mmu, bool, 0644);
+
 
 /*
  * When setting this variable to true it enables Two-Dimensional-Paging
  * where the hardware walks 2 page tables:
@@ -2703,7 +2707,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch,
-			   true, host_writable, &spte);
+			   !strict_mmu, host_writable, &spte);
 
 	if (*sptep == spte) {
 		ret = RET_PF_SPURIOUS;
@@ -5139,6 +5143,11 @@ static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
  */
 static bool detect_write_flooding(struct kvm_mmu_page *sp)
 {
+	/*
+	 * When using a non-speculating MMU, use a higher threshold
+	 * for write flood detection.
+	 */
+	int threshold = strict_mmu ? 10 : 3;
 	/*
 	 * Skip write-flooding detected for the sp whose level is 1, because
 	 * it can become unsync, then the guest page is not write-protected.
@@ -5147,7 +5156,7 @@ static bool detect_write_flooding(struct kvm_mmu_page *sp)
 		return false;
 
 	atomic_inc(&sp->write_flooding_count);
-	return atomic_read(&sp->write_flooding_count) >= 3;
+	return atomic_read(&sp->write_flooding_count) >= threshold;
 }
 
 /*
--
2.26.3