Message-Id: <DD71GUKZKFPR.2OVPQ9KOI89YG@kernel.org>
Date: Wed, 01 Oct 2025 16:01:10 +0200
From: "Danilo Krummrich" <dakr@...nel.org>
To: "Alice Ryhl" <aliceryhl@...gle.com>
Cc: "Matthew Brost" <matthew.brost@...el.com>,
Thomas Hellström <thomas.hellstrom@...ux.intel.com>,
"Maarten Lankhorst" <maarten.lankhorst@...ux.intel.com>, "Maxime Ripard"
<mripard@...nel.org>, "Thomas Zimmermann" <tzimmermann@...e.de>, "David
Airlie" <airlied@...il.com>, "Simona Vetter" <simona@...ll.ch>, "Boris
Brezillon" <boris.brezillon@...labora.com>, "Steven Price"
<steven.price@....com>, "Daniel Almeida" <daniel.almeida@...labora.com>,
"Liviu Dudau" <liviu.dudau@....com>, <dri-devel@...ts.freedesktop.org>,
<linux-kernel@...r.kernel.org>, <rust-for-linux@...r.kernel.org>
Subject: Re: [PATCH v3 1/2] drm/gpuvm: add deferred vm_bo cleanup
On Wed Oct 1, 2025 at 12:41 PM CEST, Alice Ryhl wrote:
> +/*
> + * Must be called with GEM mutex held. After releasing GEM mutex,
> + * drm_gpuvm_bo_defer_free_unlocked() must be called.
> + */
> +static void
> +drm_gpuvm_bo_defer_free_locked(struct kref *kref)
> +{
> +	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> +						  kref);
> +	struct drm_gpuvm *gpuvm = vm_bo->vm;
> +
> +	if (!drm_gpuvm_resv_protected(gpuvm)) {
> +		drm_gpuvm_bo_list_del(vm_bo, extobj, true);
> +		drm_gpuvm_bo_list_del(vm_bo, evict, true);
> +	}
> +
> +	list_del(&vm_bo->list.entry.gem);
> +}
> +
> +/*
> + * GEM mutex must not be held. Called after drm_gpuvm_bo_defer_free_locked().
> + */
> +static void
> +drm_gpuvm_bo_defer_free_unlocked(struct drm_gpuvm_bo *vm_bo)
> +{
> +	struct drm_gpuvm *gpuvm = vm_bo->vm;
> +
> +	llist_add(&vm_bo->list.entry.bo_defer, &gpuvm->bo_defer);
> +}
> +
> +static void
> +drm_gpuvm_bo_defer_free(struct kref *kref)
> +{
> +	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> +						  kref);
> +
> +	mutex_lock(&vm_bo->obj->gpuva.lock);
> +	drm_gpuvm_bo_defer_free_locked(kref);
> +	mutex_unlock(&vm_bo->obj->gpuva.lock);
> +
> +	/*
> +	 * It's important that the GEM stays alive for the duration in which we
> +	 * hold the mutex, but the instant we add the vm_bo to bo_defer,
> +	 * another thread might call drm_gpuvm_bo_deferred_cleanup() and put
> +	 * the GEM. Therefore, to avoid kfreeing a mutex we are holding, we add
> +	 * the vm_bo to bo_defer *after* releasing the GEM's mutex.
> +	 */
> +	drm_gpuvm_bo_defer_free_unlocked(vm_bo);
> +}
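For context, drm_gpuvm_bo_defer_free() is a kref release callback, so it
presumably ends up being invoked through kref_put() on the vm_bo's refcount,
roughly like this (hypothetical caller for illustration, not taken from the
patch):

	/* hypothetical put helper, assuming the usual kref pattern */
	void drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo)
	{
		/* release callback runs once the last reference is dropped */
		kref_put(&vm_bo->kref, drm_gpuvm_bo_defer_free);
	}
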
So, you're splitting drm_gpuvm_bo_defer_free() into two functions: one doing
the work that must be done with the gpuva lock held, and one doing the work
that does not require the lock, which makes perfect sense.

However, the naming chosen for the two functions, i.e.
drm_gpuvm_bo_defer_free_locked() and drm_gpuvm_bo_defer_free_unlocked(), is
confusing:

What you semantically mean is "do part 1 with the lock held" and "do part 2
without the lock held", but the chosen names suggest that both functions do
the same thing, with the only difference that one takes the lock internally
while the other requires the caller to take it.

It's probably better to name them after what they do rather than after what
they're part of. If you prefer the latter, that's fine with me too, but please
choose names that make this circumstance obvious.
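For instance (the names below are purely illustrative, not something you have
to take literally), naming them after the work they do could look like:

	/*
	 * Unlink the vm_bo from the GEM's and the GPUVM's lists; the GEM's
	 * gpuva lock must be held.
	 */
	static void
	drm_gpuvm_bo_defer_free_unlink(struct kref *kref);

	/*
	 * Queue the vm_bo for deferred cleanup; the GEM's gpuva lock must
	 * not be held.
	 */
	static void
	drm_gpuvm_bo_defer_free_queue(struct drm_gpuvm_bo *vm_bo);

That way the names carry both the locking expectation and what each step
actually does.
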
With that addressed,
Acked-by: Danilo Krummrich <dakr@...nel.org>