Message-ID: <YJrgPoORnyf9VVvY@google.com>
Date: Tue, 11 May 2021 19:51:26 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>,
Kai Huang <kai.huang@...el.com>,
Keqian Zhu <zhukeqian1@...wei.com>,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v4 6/7] KVM: x86/mmu: Skip rmap operations if rmaps not
allocated
On Tue, May 11, 2021, Ben Gardon wrote:
> @@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
> int i;
> bool write_protected = false;
>
> - for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
> - rmap_head = __gfn_to_rmap(gfn, i, slot);
> - write_protected |= __rmap_write_protect(kvm, rmap_head, true);
> + if (kvm->arch.memslots_have_rmaps) {
> + for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
> + rmap_head = __gfn_to_rmap(gfn, i, slot);
> + write_protected |= __rmap_write_protect(kvm, rmap_head,
> + true);
I vote to let "true" poke out.
> + }
> }
>
> if (is_tdp_mmu_enabled(kvm))
...
> @@ -5440,7 +5455,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
> */
> kvm_reload_remote_mmus(kvm);
>
> - kvm_zap_obsolete_pages(kvm);
> + if (kvm->arch.memslots_have_rmaps)
> + kvm_zap_obsolete_pages(kvm);
Hmm, for cases where we're iterating over the list of active_mmu_pages, I would
prefer to either leave the code as-is or short-circuit the helpers with a more
explicit:
if (list_empty(&kvm->arch.active_mmu_pages))
return ...;
I'd probably vote for leaving the code as-is; the loop iteration and list_empty()
check in kvm_mmu_commit_zap_page() add a single compare-and-jump in the worst-case
scenario.
In other words, restrict use of memslots_have_rmaps to flows that directly
walk the rmaps, as opposed to partially overloading memslots_have_rmaps to mean
"is using legacy MMU".
> write_unlock(&kvm->mmu_lock);
>
...
> @@ -5681,6 +5702,14 @@ void kvm_mmu_zap_all(struct kvm *kvm)
> int ign;
>
> write_lock(&kvm->mmu_lock);
> + if (is_tdp_mmu_enabled(kvm))
> + kvm_tdp_mmu_zap_all(kvm);
> +
> + if (!kvm->arch.memslots_have_rmaps) {
> + write_unlock(&kvm->mmu_lock);
> + return;
Another case where falling through to walking active_mmu_pages is perfectly ok.
> + }
> +
> restart:
> list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
> if (WARN_ON(sp->role.invalid))
> @@ -5693,9 +5722,6 @@ void kvm_mmu_zap_all(struct kvm *kvm)
>
> kvm_mmu_commit_zap_page(kvm, &invalid_list);
>
> - if (is_tdp_mmu_enabled(kvm))
> - kvm_tdp_mmu_zap_all(kvm);
> -
> write_unlock(&kvm->mmu_lock);
> }
>
> --
> 2.31.1.607.g51e8a6a459-goog
>