Message-ID: <ZBHlIuhED8y3JIu1@google.com>
Date: Wed, 15 Mar 2023 08:32:50 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Zhi Wang <zhi.a.wang@...el.com>, kvm@...r.kernel.org,
intel-gvt-dev@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH v2 11/27] KVM: x86/mmu: Don't rely on page-track mechanism
to flush on memslot change
On Wed, Mar 15, 2023, Yan Zhao wrote:
> On Fri, Mar 10, 2023 at 04:22:42PM -0800, Sean Christopherson wrote:
> ...
> > -static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> > - struct kvm_memory_slot *slot,
> > - struct kvm_page_track_notifier_node *node)
> > -{
> > - kvm_mmu_zap_all_fast(kvm);
> > -}
> > -
> > int kvm_mmu_init_vm(struct kvm *kvm)
> > {
> > struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> > @@ -6110,7 +6103,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> > }
> >
> > node->track_write = kvm_mmu_pte_write;
> > - node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
> > kvm_page_track_register_notifier(kvm, node);
> >
> > kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index f706621c35b8..29dd6c97d145 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12662,6 +12662,8 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
> > void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> > struct kvm_memory_slot *slot)
> > {
> > + kvm_mmu_zap_all_fast(kvm);
> Could we still call kvm_mmu_invalidate_zap_pages_in_memslot() here?
> As far as I know, for TDX, its version of
> kvm_mmu_invalidate_zap_pages_in_memslot() looks like
>
> static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> struct kvm_page_track_notifier_node *node)
> {
> if (kvm_gfn_shared_mask(kvm))
> kvm_mmu_zap_memslot(kvm, slot);
> else
> kvm_mmu_zap_all_fast(kvm);
> }
>
> Maybe this kind of judgment is better confined to mmu.c?
Hmm, yeah, I agree. The only reason I exposed kvm_mmu_zap_all_fast() is because
kvm_mmu_zap_all() is already exposed for kvm_arch_flush_shadow_all() and it felt
weird/wrong to split those. But that's the only usage of kvm_mmu_zap_all(), so
a better approach to maintain consistency would be to move
kvm_arch_flush_shadow_{all,memslot}() into mmu.c. I'll do that in the next version.