Message-ID: <Y7YfCKcalLkKnIV+@yzhao56-desk.sh.intel.com>
Date: Thu, 5 Jan 2023 08:51:20 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Zhi Wang <zhi.a.wang@...el.com>, <kvm@...r.kernel.org>,
<intel-gvt-dev@...ts.freedesktop.org>,
<intel-gfx@...ts.freedesktop.org>, <linux-kernel@...r.kernel.org>,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH 09/27] drm/i915/gvt: Protect gfn hash table with
dedicated mutex
On Tue, Jan 03, 2023 at 08:43:17PM +0000, Sean Christopherson wrote:
> On Wed, Dec 28, 2022, Yan Zhao wrote:
> > On Fri, Dec 23, 2022 at 12:57:21AM +0000, Sean Christopherson wrote:
> > > Add and use a new mutex, gfn_lock, to protect accesses to the hash table
> > > used to track which gfns are write-protected when shadowing the guest's
> > > GTT. This fixes a bug where kvmgt_page_track_write(), which doesn't hold
> > > kvm->mmu_lock, could race with intel_gvt_page_track_remove() and trigger
> > > a use-after-free.
> > >
> > > Fixing kvmgt_page_track_write() by taking kvm->mmu_lock is not an option
> > > as mmu_lock is a r/w spinlock, and intel_vgpu_page_track_handler() might
> > > sleep when acquiring vgpu->cache_lock deep down the callstack:
> > >
> > > intel_vgpu_page_track_handler()
> > > |
> > > |-> page_track->handler / ppgtt_write_protection_handler()
> > >     |
> > >     |-> ppgtt_handle_guest_write_page_table_bytes()
> > >         |
> > >         |-> ppgtt_handle_guest_write_page_table()
> > >             |
> > >             |-> ppgtt_handle_guest_entry_removal()
> > >                 |
> > >                 |-> ppgtt_invalidate_pte()
> > >                     |
> > >                     |-> intel_gvt_dma_unmap_guest_page()
> > >                         |
> > >                         |-> mutex_lock(&vgpu->cache_lock);
> > >
> > This gfn_lock could lead to a deadlock in the sequence below.
> >
> > (1) kvm_write_track_add_gfn() to GFN 1
> > (2) kvmgt_page_track_write() for GFN 1
> > kvmgt_page_track_write()
> > |
> > |->mutex_lock(&info->vgpu_lock)
> > |->intel_vgpu_page_track_handler()  (gfn_lock is held here, as for kvmgt_gfn_is_write_protected())
> >    |
> >    |->page_track->handler() (ppgtt_write_protection_handler())
> >       |
> >       |->ppgtt_handle_guest_write_page_table_bytes()
> >          |
> >          |->ppgtt_handle_guest_write_page_table()
> >             |
> >             |->ppgtt_handle_guest_entry_add() --> new_present
> >                |
> >                |->ppgtt_populate_spt_by_guest_entry()
> >                   |
> >                   |->intel_vgpu_enable_page_track() --> for GFN 2
> >                      |
> >                      |->intel_gvt_page_track_add()
> >                         |
> >                         |->mutex_lock(&info->gfn_lock) ===>deadlock
>
> Or even more simply,
>
> kvmgt_page_track_write()
> |
> -> intel_vgpu_page_track_handler()
>    |
>    -> intel_gvt_page_track_remove()
>
yes.
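
To spell out where gfn_lock is taken the first time: with this patch,
kvmgt_page_track_write() holds gfn_lock across the handler call, so any
handler path that reaches intel_gvt_page_track_add()/remove() takes gfn_lock
a second time. Roughly (my reconstruction of the patched locking, not the
exact diff):

  mutex_lock(&info->vgpu_lock);
  mutex_lock(&info->gfn_lock);            /* first acquisition */

  if (kvmgt_gfn_is_write_protected(info, gpa_to_gfn(gpa)))
          intel_vgpu_page_track_handler(info, gpa, (void *)val, len);
          /*
           * ... -> intel_gvt_page_track_add()/remove()
           *     -> mutex_lock(&info->gfn_lock): second acquisition on a
           *        non-recursive mutex, hence the deadlock.
           */

  mutex_unlock(&info->gfn_lock);
  mutex_unlock(&info->vgpu_lock);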
> >
> > The fix below, based on this patch, reuses vgpu_lock to protect the hash
> > table info->ptable.
> > Please check whether it looks good.
> >
> >
> > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > index b924ed079ad4..526bd973e784 100644
> > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > @@ -364,7 +364,7 @@ __kvmgt_protect_table_find(struct intel_vgpu *info, gfn_t gfn)
> > {
> > struct kvmgt_pgfn *p, *res = NULL;
> >
> > - lockdep_assert_held(&info->gfn_lock);
> > + lockdep_assert_held(&info->vgpu_lock);
> >
> > hash_for_each_possible(info->ptable, p, hnode, gfn) {
> > if (gfn == p->gfn) {
> > @@ -388,7 +388,7 @@ static void kvmgt_protect_table_add(struct intel_vgpu *info, gfn_t gfn)
> > {
> > struct kvmgt_pgfn *p;
> >
> > - lockdep_assert_held(&info->gfn_lock);
> > + lockdep_assert_held(&info->vgpu_lock);
>
> I'll just delete these assertions; the one in __kvmgt_protect_table_find() should
> cover everything and is ultimately the assert that matters.
>
> > @@ -1629,12 +1629,11 @@ static void kvmgt_page_track_remove_region(gfn_t gfn, unsigned long nr_pages,
> > struct intel_vgpu *info =
> > container_of(node, struct intel_vgpu, track_node);
> >
> > - mutex_lock(&info->gfn_lock);
> > + lockdep_assert_held(&info->vgpu_lock);
>
> This path needs to manually take vgpu_lock as it's called from KVM. IIRC, this
> is the main reason I tried adding a new lock. That and I had a hell of a time
> figuring out whether or not vgpu_lock would actually be held.
Right. In the kvmgt_page_track_remove_region() path, explicit
mutex_lock(&info->vgpu_lock) and mutex_unlock(&info->vgpu_lock) calls are
required, i.e.:
static void kvmgt_page_track_remove_region(gfn_t gfn, unsigned long nr_pages,
                                           struct kvm_page_track_notifier_node *node)
{
        unsigned long i;
        struct intel_vgpu *info =
                container_of(node, struct intel_vgpu, track_node);

        mutex_lock(&info->vgpu_lock);

        for (i = 0; i < nr_pages; i++) {
                if (kvmgt_gfn_is_write_protected(info, gfn + i))
                        kvmgt_protect_table_del(info, gfn + i);
        }

        mutex_unlock(&info->vgpu_lock);
}
The reason lockdep_assert_held(&info->vgpu_lock) previously passed for me is
that I didn't have LOCKDEP configured, so the assertion was effectively a
no-op.
(Sorry, I did also call mutex_is_locked(&info->vgpu_lock) in some paths to
check that lockdep_assert_held() behaved properly, but it's my fault for not
double-checking that LOCKDEP was actually compiled in.)
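
For illustration, the difference boils down to something like the below
(hypothetical helper, not part of any patch): the lockdep assertion compiles
away without CONFIG_LOCKDEP, while the mutex_is_locked() check is compiled in
regardless of the debug config.

  /* Hypothetical helper, only to show the CONFIG_LOCKDEP dependency. */
  static void assert_vgpu_locked(struct intel_vgpu *info)
  {
          /* Effectively a no-op when CONFIG_LOCKDEP is not set. */
          lockdep_assert_held(&info->vgpu_lock);

          /* Runtime check that is always built, whatever the config. */
          WARN_ON(!mutex_is_locked(&info->vgpu_lock));
  }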
>
> Looking at this with fresh eyes, AFAICT intel_vgpu_reset_gtt() is the only other
> path that can reach __kvmgt_protect_table_find() without holding vgpu_lock, by
> way of intel_gvt_page_track_remove(). But unless there's magic I'm missing, that's
> dead code and can simply be deleted.
Yes, I found that intel_vgpu_reset_gtt() has not been called since commit
ba25d977571e1551b7032d6104e49efd6f88f8ad.