Message-Id: <20210812050717.3176478-3-seanjc@google.com>
Date: Wed, 11 Aug 2021 22:07:17 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Ben Gardon <bgardon@...gle.com>
Subject: [PATCH 2/2] KVM: x86/mmu: Don't step down in the TDP iterator when
zapping all SPTEs

Set the min_level for the TDP iterator at the root level when zapping all
SPTEs so that the _iterator_ only processes top-level SPTEs. Zapping a
non-leaf SPTE will recursively zap all its children, so there is no need
for the iterator to attempt to step down. This avoids rereading all the
top-level SPTEs after they are zapped, by causing try_step_down() to
short-circuit.

Cc: Ben Gardon <bgardon@...gle.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
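Note for readers following along: the reason pinning min_level at the root
level makes the walk top-level only is that the iterator's step-down path
bails out as soon as the current level reaches min_level. Below is a minimal
userspace sketch of that decision; the names (toy_iter, toy_try_step_down,
TOY_ROOT_LEVEL, TOY_LEAF_LEVEL) are made up for illustration and are not the
KVM/TDP MMU API.

/*
 * Toy model of the TDP iterator's step-down decision. All names here are
 * hypothetical; they only illustrate the min_level short-circuit.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOY_ROOT_LEVEL	4	/* e.g. a 4-level paging root */
#define TOY_LEAF_LEVEL	1	/* analogous to PG_LEVEL_4K */

struct toy_iter {
	int level;	/* level of the SPTE currently being visited */
	int min_level;	/* lowest level the walk is allowed to reach */
};

/* Mirrors the idea of try_step_down(): refuse to descend at min_level. */
static bool toy_try_step_down(struct toy_iter *it)
{
	if (it->level == it->min_level)
		return false;	/* short-circuit: stay at this level */
	it->level--;
	return true;
}

int main(void)
{
	struct toy_iter it = { .level = TOY_ROOT_LEVEL,
			       .min_level = TOY_ROOT_LEVEL };

	/* With min_level == root level, no descent ever happens. */
	printf("stepped down: %s\n",
	       toy_try_step_down(&it) ? "yes" : "no");

	/* With min_level == leaf level, the walk descends as usual. */
	it.min_level = TOY_LEAF_LEVEL;
	printf("stepped down: %s\n",
	       toy_try_step_down(&it) ? "yes" : "no");
	return 0;
}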
arch/x86/kvm/mmu/tdp_mmu.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6566f70a31c1..aec069c18d82 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -751,6 +751,16 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
{
bool zap_all = (end == ZAP_ALL_END);
struct tdp_iter iter;
+ int min_level;
+
+ /*
+ * No need to step down in the iterator when zapping all SPTEs, zapping
+ * the top-level non-leaf SPTEs will recurse on all their children.
+ */
+ if (zap_all)
+ min_level = root->role.level;
+ else
+ min_level = PG_LEVEL_4K;

/*
* Bound the walk at host.MAXPHYADDR, guest accesses beyond that will
@@ -763,7 +773,8 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,

rcu_read_lock();

- tdp_root_for_each_pte(iter, root, start, end) {
+ for_each_tdp_pte_min_level(iter, root->spt, root->role.level,
+ min_level, start, end) {
retry:
if (can_yield &&
tdp_mmu_iter_cond_resched(kvm, &iter, flush, shared)) {
--
2.33.0.rc1.237.g0d66db33f3-goog