Message-ID: <20230803235017.GA2257301@ls.amr.corp.intel.com>
Date: Thu, 3 Aug 2023 16:50:17 -0700
From: Isaku Yamahata <isaku.yamahata@...il.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Zhi Wang <zhi.a.wang@...el.com>, kvm@...r.kernel.org,
intel-gvt-dev@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Yan Zhao <yan.y.zhao@...el.com>,
Yongwei Ma <yongwei.ma@...el.com>,
Ben Gardon <bgardon@...gle.com>, isaku.yamahata@...il.com
Subject: Re: [PATCH v4 12/29] KVM: x86/mmu: Move
kvm_arch_flush_shadow_{all,memslot}() to mmu.c
On Fri, Jul 28, 2023 at 06:35:18PM -0700,
Sean Christopherson <seanjc@...gle.com> wrote:
> Move x86's implementation of kvm_arch_flush_shadow_{all,memslot}() into
> mmu.c, and make kvm_mmu_zap_all() static as it was globally visible only
> for kvm_arch_flush_shadow_all(). This will allow refactoring
> kvm_arch_flush_shadow_memslot() to call kvm_mmu_zap_all_fast() directly without
> having to expose kvm_mmu_zap_all_fast() outside of mmu.c. Keeping
> everything in mmu.c will also likely simplify supporting TDX, which
> intends to zap only the relevant SPTEs on memslot updates.
Yes, moving the MMU-related functions into mmu.c helps keep the TDX code cleaner.
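
As a data point, the follow-up this enables could look roughly like the
sketch below (hypothetical, not part of this patch; it assumes
kvm_mmu_zap_all_fast() remains file-local to mmu.c, per the commit message):

	/*
	 * Hypothetical follow-up sketch: with the arch hook now living in
	 * mmu.c, the memslot flush can reuse the file-local fast-zap path
	 * without exporting kvm_mmu_zap_all_fast().
	 */
	void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
					   struct kvm_memory_slot *slot)
	{
		kvm_mmu_zap_all_fast(kvm);
	}
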
Reviewed-by: Isaku Yamahata <isaku.yamahata@...el.com>
Thanks,
>
> No functional change intended.
>
> Suggested-by: Yan Zhao <yan.y.zhao@...el.com>
> Tested-by: Yongwei Ma <yongwei.ma@...el.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 -
> arch/x86/kvm/mmu/mmu.c | 13 ++++++++++++-
> arch/x86/kvm/x86.c | 11 -----------
> 3 files changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 28bd38303d70..856ec22aceb6 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1832,7 +1832,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> const struct kvm_memory_slot *memslot);
> void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
> const struct kvm_memory_slot *memslot);
> -void kvm_mmu_zap_all(struct kvm *kvm);
> void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
> void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index ec169f5c7dce..c6dee659d592 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6732,7 +6732,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
> */
> }
>
> -void kvm_mmu_zap_all(struct kvm *kvm)
> +static void kvm_mmu_zap_all(struct kvm *kvm)
> {
> struct kvm_mmu_page *sp, *node;
> LIST_HEAD(invalid_list);
> @@ -6757,6 +6757,17 @@ void kvm_mmu_zap_all(struct kvm *kvm)
> write_unlock(&kvm->mmu_lock);
> }
>
> +void kvm_arch_flush_shadow_all(struct kvm *kvm)
> +{
> + kvm_mmu_zap_all(kvm);
> +}
> +
> +void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> + struct kvm_memory_slot *slot)
> +{
> + kvm_page_track_flush_slot(kvm, slot);
> +}
> +
> void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> {
> WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a6b9bea62fb8..059571d5abed 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12776,17 +12776,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> kvm_arch_free_memslot(kvm, old);
> }
>
> -void kvm_arch_flush_shadow_all(struct kvm *kvm)
> -{
> - kvm_mmu_zap_all(kvm);
> -}
> -
> -void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> - struct kvm_memory_slot *slot)
> -{
> - kvm_page_track_flush_slot(kvm, slot);
> -}
> -
> static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
> {
> return (is_guest_mode(vcpu) &&
> --
> 2.41.0.487.g6d72f3e995-goog
>
--
Isaku Yamahata <isaku.yamahata@...il.com>