Message-Id: <20220605064342.309219-5-jiangshanlai@gmail.com>
Date: Sun, 5 Jun 2022 14:43:34 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Lai Jiangshan <jiangshan.ljs@...group.com>
Subject: [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents()
From: Lai Jiangshan <jiangshan.ljs@...group.com>
mmu_unsync_walk() is designed to work on a pagetable whose shadow pages
have unsync child bits set even when the pagetable contains no unsync
shadow pages.

This situation arises when the unsync shadow pages of a pagetable are
also reachable from other pagetables and have already been synced or
zapped while those other pagetables were being synced or zapped.

So mmu_pages_clear_parents() is not required even when the callers of
mmu_unsync_walk() zap or sync the pagetable.

Remove mmu_pages_clear_parents(); the stale unsync child bits are
cleared in one go by the next call of mmu_unsync_walk().

Removing mmu_pages_clear_parents() allows mmu_unsync_walk() to be
simplified further, including removing struct mmu_page_path, since
mmu_unsync_walk() is the only remaining user of it.

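To illustrate why the lazy clearing is safe, here is a minimal userspace
sketch (not kernel code; every structure and helper below is a simplified
stand-in for the real kvm_mmu_page machinery): two parents share one
unsync child, the child is synced through parent A, and the stale bit
left behind in parent B is dropped by the next walk, mirroring what
mmu_unsync_walk() does with clear_unsync_child_bit().

  #include <stdbool.h>
  #include <stdio.h>

  #define NR_ENTRIES 8

  struct toy_sp {
          bool unsync;                       /* this page needs syncing */
          int unsync_children;               /* number of set child bits */
          bool unsync_child_bit[NR_ENTRIES]; /* toy unsync_child_bitmap */
          struct toy_sp *child[NR_ENTRIES];
  };

  static void clear_child_bit(struct toy_sp *sp, int i)
  {
          if (sp->unsync_child_bit[i]) {
                  sp->unsync_child_bit[i] = false;
                  sp->unsync_children--;
          }
  }

  /* Walk like mmu_unsync_walk(): count unsync pages, drop stale bits. */
  static int toy_unsync_walk(struct toy_sp *sp)
  {
          int i, found = 0;

          for (i = 0; i < NR_ENTRIES; i++) {
                  struct toy_sp *child = sp->child[i];

                  if (!sp->unsync_child_bit[i])
                          continue;
                  if (child && child->unsync)
                          found++;
                  else
                          clear_child_bit(sp, i); /* stale: clear lazily */
          }
          return found;
  }

  int main(void)
  {
          struct toy_sp child = { .unsync = true };
          struct toy_sp a = { .unsync_children = 1 };
          struct toy_sp b = { .unsync_children = 1 };

          a.child[0] = b.child[0] = &child;
          a.unsync_child_bit[0] = b.unsync_child_bit[0] = true;

          /* Sync through parent A: child synced, A's bit consumed. */
          child.unsync = false;
          clear_child_bit(&a, 0);

          /* B keeps a stale bit; the next walk clears it in one go. */
          printf("B before walk: %d unsync child bit(s)\n",
                 b.unsync_children);
          toy_unsync_walk(&b);
          printf("B after walk:  %d unsync child bit(s)\n",
                 b.unsync_children);
          return 0;
  }

The stale bit in B costs nothing until the next walk, which is exactly
where the walker already has to visit the bitmap anyway, so eager
clearing via the parents path buys nothing.
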
Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
arch/x86/kvm/mmu/mmu.c | 19 -------------------
1 file changed, 19 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cc0207e26f6e..f35fd5c59c38 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1948,23 +1948,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
return mmu_pages_next(pvec, parents, 0);
}
-static void mmu_pages_clear_parents(struct mmu_page_path *parents)
-{
- struct kvm_mmu_page *sp;
- unsigned int level = 0;
-
- do {
- unsigned int idx = parents->idx[level];
- sp = parents->parent[level];
- if (!sp)
- return;
-
- WARN_ON(idx == INVALID_INDEX);
- clear_unsync_child_bit(sp, idx);
- level++;
- } while (!sp->unsync_children);
-}
-
static int mmu_sync_children(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *parent, bool can_yield)
{
@@ -1989,7 +1972,6 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
for_each_sp(pages, sp, parents, i) {
kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
- mmu_pages_clear_parents(&parents);
}
if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, flush);
@@ -2298,7 +2280,6 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
for_each_sp(pages, sp, parents, i) {
kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
- mmu_pages_clear_parents(&parents);
zapped++;
}
}
--
2.19.1.6.gb485710b