Message-ID: <CANgfPd_M=De3L41+86y8V-5tYGPQ96UC3sq+D=N9EVCOvwXcKw@mail.gmail.com>
Date: Thu, 25 Mar 2021 14:47:53 -0700
From: Ben Gardon <bgardon@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU
during NX zapping
On Thu, Mar 25, 2021 at 1:01 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> Honor the "flush needed" return from kvm_tdp_mmu_zap_gfn_range(), which
> does the flush itself if and only if it yields (which it will never do in
> this particular scenario), and otherwise expects the caller to do the
> flush. If pages are zapped from the TDP MMU but not the legacy MMU, then
> no flush will occur.
>
> Fixes: 29cf0f5007a2 ("kvm: x86/mmu: NX largepage recovery for TDP MMU")
> Cc: stable@...r.kernel.org
> Cc: Ben Gardon <bgardon@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
Reviewed-by: Ben Gardon <bgardon@...gle.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c6ed633594a2..5a53743b37bc 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5939,6 +5939,8 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
> struct kvm_mmu_page *sp;
> unsigned int ratio;
> LIST_HEAD(invalid_list);
> + bool flush = false;
> + gfn_t gfn_end;
> ulong to_zap;
>
> rcu_idx = srcu_read_lock(&kvm->srcu);
> @@ -5960,19 +5962,20 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
> lpage_disallowed_link);
> WARN_ON_ONCE(!sp->lpage_disallowed);
> if (is_tdp_mmu_page(sp)) {
> - kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn,
> - sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level));
> + gfn_end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
> + flush = kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, gfn_end);
> } else {
> kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
> WARN_ON_ONCE(sp->lpage_disallowed);
> }
>
> if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
> - kvm_mmu_commit_zap_page(kvm, &invalid_list);
> + kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
> cond_resched_rwlock_write(&kvm->mmu_lock);
> + flush = false;
> }
> }
> - kvm_mmu_commit_zap_page(kvm, &invalid_list);
> + kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
>
> write_unlock(&kvm->mmu_lock);
> srcu_read_unlock(&kvm->srcu, rcu_idx);
> --
> 2.31.0.291.g576ba9dcdaf-goog
>