Message-ID: <YbksiTgVdzN0Z6Dn@google.com>
Date: Tue, 14 Dec 2021 23:45:13 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Hou Wenlong <houwenlong93@...ux.alibaba.com>
Subject: Re: [PATCH 10/28] KVM: x86/mmu: Allow yielding when zapping GFNs for
	defunct TDP MMU root

On Mon, Nov 22, 2021, Ben Gardon wrote:
> On Fri, Nov 19, 2021 at 8:51 PM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > Allow yielding when zapping SPTEs for a defunct TDP MMU root. Yielding
> > is safe from a TDP perspective, as the root is unreachable. The only
> > potential danger is putting a root from a non-preemptible context, and
> > KVM currently does not do so.
> >
> > Yield-unfriendly iteration uses for_each_tdp_mmu_root(), which doesn't
> > take a reference to each root (it requires mmu_lock be held for the
> > entire duration of the walk).
> >
> > tdp_mmu_next_root() is used only by the yield-friendly iterator.
> >
> > kvm_tdp_mmu_zap_invalidated_roots() is explicitly yield friendly.
> >
> > kvm_mmu_free_roots() => mmu_free_root_page() is a much bigger fan-out,
> > but is still yield-friendly in all call sites, as all callers can be
> > traced back to some combination of vcpu_run(), kvm_destroy_vm(), and/or
> > kvm_create_vm().
> >
> > Signed-off-by: Sean Christopherson <seanjc@...gle.com>
>
> Reviewed-by: Ben Gardon <bgardon@...gle.com>
>
> I'm glad to see this fixed. I assume we don't usually hit this in
> testing because most of the teardown happens in the zap-all path when
> we unregister for MMU notifiers, and actually deleting a fully
> populated root while the VM is running is pretty rare.

Another *sigh*.

AFAIK, the above analysis is 100% correct, but there's a subtle problem with
yielding while putting the last reference to a root. If the mmu_notifier runs
in parallel, it (obviously) won't be able to get a reference to the root, and so
KVM will fail to ensure all references to an unmapped range are removed prior to
returning from the mmu_notifier.
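
To make the race concrete, the interleaving is roughly the following (just a
sketch, the names only approximate the current TDP MMU helpers):

  vCPU path putting the last reference      mmu_notifier invalidation
  ------------------------------------      -------------------------
  kvm_tdp_mmu_put_root()
    refcount 1 -> 0
    zap the root's SPTEs, yield-friendly
      cond_resched_rwlock_write()
        drops mmu_lock              ---->   for_each_tdp_mmu_root_yield_safe()
                                              kvm_tdp_mmu_get_root() fails
                                              (refcount == 0), root is skipped
                                            notifier returns, but the range's
                                            SPTEs are still present in the
                                            defunct root
      reacquires mmu_lock
    ... finishes zapping, too late ...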

But, I have an idea. Instead of synchronously zapping the defunct root, mark it
invalid, set the refcount back to '1', and then use a helper kthread to do the
teardown. Assuming there is exactly one helper, that would also address my
concerns with kvm_tdp_mmu_zap_invalidated_roots() being unsafe to call in parallel,
e.g. two zappers processing an invalid root would both put the last reference to
a root and trigger a use-after-free of a different kind.
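
Very roughly, the put path could look something like this (just a sketch,
tdp_mmu_schedule_zap_root() and the worker behind it are hypothetical):

void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
			  bool shared)
{
	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
		return;

	/*
	 * Don't zap the root here, that would require yielding while
	 * putting the last reference.  Instead, mark the root invalid so
	 * it's treated like any other invalidated root, take the
	 * reference back, and let the one-and-only worker do the actual
	 * teardown.
	 */
	root->role.invalid = true;
	refcount_set(&root->tdp_mmu_root_count, 1);

	/* Hypothetical helper that hands the root off to the zap worker. */
	tdp_mmu_schedule_zap_root(kvm, root);
}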