Message-ID: <YJGmpOzaFy9E0f5T@google.com>
Date: Tue, 4 May 2021 19:55:16 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH v2 1/7] KVM: x86/mmu: Track if shadow MMU active
On Thu, Apr 29, 2021, Ben Gardon wrote:
> Add a field to each VM to track if the shadow / legacy MMU is actually
> in use. If the shadow MMU is not in use, then that knowledge opens the
> door to other optimizations which will be added in future patches.
>
> Signed-off-by: Ben Gardon <bgardon@...gle.com>
> ---
> arch/x86/include/asm/kvm_host.h | 2 ++
> arch/x86/kvm/mmu/mmu.c | 10 +++++++++-
> arch/x86/kvm/mmu/mmu_internal.h | 2 ++
> arch/x86/kvm/mmu/tdp_mmu.c | 6 ++++--
> arch/x86/kvm/mmu/tdp_mmu.h | 4 ++--
> 5 files changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index ad22d4839bcc..3900dcf2439e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1122,6 +1122,8 @@ struct kvm_arch {
> */
> spinlock_t tdp_mmu_pages_lock;
> #endif /* CONFIG_X86_64 */
> +
> + bool shadow_mmu_active;
I'm not a fan of the name; "shadow mmu" in KVM almost always means shadow paging
of some form, whereas this covers both shadow paging and legacy TDP support.
But, I think we can avoid bikeshedding by simply eliminating this flag.  More
in later patches.
> };
>
> struct kvm_vm_stat {
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 930ac8a7e7c9..3975272321d0 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3110,6 +3110,11 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> return ret;
> }
>
> +void activate_shadow_mmu(struct kvm *kvm)
> +{
> + kvm->arch.shadow_mmu_active = true;
> +}
> +
> static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
> struct list_head *invalid_list)
> {
> @@ -3280,6 +3285,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
> }
> }
>
> + activate_shadow_mmu(vcpu->kvm);
> +
> write_lock(&vcpu->kvm->mmu_lock);
> r = make_mmu_pages_available(vcpu);
> if (r < 0)
> @@ -5467,7 +5474,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
> {
> struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
>
> - kvm_mmu_init_tdp_mmu(kvm);
> + if (!kvm_mmu_init_tdp_mmu(kvm))
> + activate_shadow_mmu(kvm);
Doesn't come into play yet, but I would strongly prefer to open code setting the
necessary flag instead of relying on the helper to never fail.
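E.g. a rough, untested sketch of what open coding the flag would look like here,
reusing the shadow_mmu_active name from this patch:

	if (!kvm_mmu_init_tdp_mmu(kvm))
		kvm->arch.shadow_mmu_active = true;

That way the caller doesn't silently depend on activate_shadow_mmu() staying a
one-liner that can't fail.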