[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <aMEdMg_3ljC27i1-@google.com>
Date: Wed, 10 Sep 2025 06:39:46 +0000
From: Alice Ryhl <aliceryhl@...gle.com>
To: "Thomas Hellström" <thomas.hellstrom@...ux.intel.com>
Cc: Danilo Krummrich <dakr@...nel.org>, Matthew Brost <matthew.brost@...el.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>, David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Boris Brezillon <boris.brezillon@...labora.com>, Steven Price <steven.price@....com>,
Daniel Almeida <daniel.almeida@...labora.com>, Liviu Dudau <liviu.dudau@....com>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
rust-for-linux@...r.kernel.org
Subject: Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
On Tue, Sep 09, 2025 at 04:20:32PM +0200, Thomas Hellström wrote:
> On Tue, 2025-09-09 at 13:36 +0000, Alice Ryhl wrote:
> > When using GPUVM in immediate mode, it is necessary to call
> > drm_gpuvm_unlink() from the fence signalling critical path. However,
> > unlink may call drm_gpuvm_bo_put(), which causes some challenges:
> >
> > 1. drm_gpuvm_bo_put() often requires you to take resv locks, which
> > you
> > can't do from the fence signalling critical path.
> > 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often
> > going
> > to be unsafe to call from the fence signalling critical path.
> >
> > To solve these issues, add a deferred version of drm_gpuvm_unlink()
> > that
> > adds the vm_bo to a deferred cleanup list, and then clean it up
> > later.
> >
> > The new methods take the GEMs GPUVA lock internally rather than
> > letting
> > the caller do it because it also needs to perform an operation after
> > releasing the mutex again. This is to prevent freeing the GEM while
> > holding the mutex (more info as comments in the patch). This means
> > that
> > the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
> >
> > Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
> > ---
> > drivers/gpu/drm/drm_gpuvm.c | 174
> > ++++++++++++++++++++++++++++++++++++++++++++
> > include/drm/drm_gpuvm.h | 26 +++++++
> > 2 files changed, 200 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gpuvm.c
> > b/drivers/gpu/drm/drm_gpuvm.c
> > index
> > 78a1a4f095095e9379bdf604d583f6c8b9863ccb..5aa8b3813019705f70101950af2
> > d8fe4e648e9d0 100644
> > --- a/drivers/gpu/drm/drm_gpuvm.c
> > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > @@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm,
> > spinlock_t *lock,
> > cond_spin_unlock(lock, !!lock);
> > }
> >
> > +/**
> > + * drm_gpuvm_bo_is_dead() - check whether this vm_bo is scheduled
>
> NIT: Is zombie a better name than dead?
I could see that name make sense.
> > /**
> > * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
> > * @__vm_bo: the &drm_gpuvm_bo
> > @@ -1081,6 +1102,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const
> > char *name,
> > INIT_LIST_HEAD(&gpuvm->evict.list);
> > spin_lock_init(&gpuvm->evict.lock);
> >
> > + INIT_LIST_HEAD(&gpuvm->bo_defer.list);
> > + spin_lock_init(&gpuvm->bo_defer.lock);
> > +
>
> This list appears to exactly follow the pattern a lockless list was
> designed for. Saves some space in the vm_bo and gets rid of the
> excessive locking. <include/linux/llist.h>
Good point.
Alice
Powered by blists - more mailing lists