Message-ID: <20250224120104.26211-4-phasta@kernel.org>
Date: Mon, 24 Feb 2025 13:01:04 +0100
From: Philipp Stanner <phasta@...nel.org>
To: Matthew Brost <matthew.brost@...el.com>,
Danilo Krummrich <dakr@...nel.org>,
Philipp Stanner <phasta@...nel.org>,
Christian König <ckoenig.leichtzumerken@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Tvrtko Ursulin <tvrtko.ursulin@...lia.com>
Cc: dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
Philipp Stanner <pstanner@...hat.com>
Subject: [PATCH v6 2/3] drm/sched: Document run_job() refcount hazard
From: Philipp Stanner <pstanner@...hat.com>

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware has completed
the associated job. The scheduler does not increment the reference
count on that fence, but implicitly expects to inherit this fence from
run_job(). This is relatively subtle and prone to misunderstandings.

This implies that, to keep a reference for itself, a driver needs to
call dma_fence_get() in addition to dma_fence_init() in that callback.
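
To illustrate what this means for a driver, a run_job() implementation
roughly follows the pattern sketched below. This is a minimal sketch
only: driver_job, driver_fence_ops, driver_submit_to_hw() and the
field names are made up for illustration and not taken from any real
driver.

static struct dma_fence *driver_run_job(struct drm_sched_job *sched_job)
{
        struct driver_job *job =
                container_of(sched_job, struct driver_job, base);

        /* The initial reference from dma_fence_init() is the one that
         * the scheduler inherits and eventually drops again with
         * dma_fence_put().
         */
        dma_fence_init(&job->hw_fence, &driver_fence_ops,
                       &job->fence_lock, job->fence_context,
                       job->fence_seqno);

        /* Take an additional reference for the driver itself, to be
         * dropped once the hardware has completed and the fence has
         * been signalled.
         */
        dma_fence_get(&job->hw_fence);

        driver_submit_to_hw(job);

        return &job->hw_fence;
}

Without that dma_fence_get(), the scheduler's dma_fence_put() could
drop the last reference and free the fence while the driver still
holds a pointer to it.
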
The situation is further complicated by the fact that the scheduler
also decrements the refcount in drm_sched_run_job_work(), since it
created a new reference in drm_sched_fence_scheduled(). It does,
however, still use its pointer to the fence after calling
dma_fence_put() - which is safe because of the aforementioned new
reference, but which nevertheless violates the refcounting rules.

Move the call to dma_fence_put() to after the last usage of the fence.

Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().

Suggested-by: Danilo Krummrich <dakr@...nel.org>
Signed-off-by: Philipp Stanner <pstanner@...hat.com>
Reviewed-by: Danilo Krummrich <dakr@...nel.org>
---
drivers/gpu/drm/scheduler/sched_main.c | 9 ++++++---
include/drm/gpu_scheduler.h | 1 +
2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8c36a59afb72..965021e76fde 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1217,20 +1217,23 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	drm_sched_job_begin(sched_job);
 	trace_drm_run_job(sched_job, entity);
 
+	/*
+	 * The run_job() callback must by definition return a fence whose
+	 * refcount has been incremented for the scheduler already.
+	 */
 	fence = sched->ops->run_job(sched_job);
 	complete_all(&entity->entity_idle);
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 5d0394f40aae..c59a903deebe 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -447,6 +447,7 @@ struct drm_sched_backend_ops {
 	 * * On success: dma_fence the driver must signal once the hardware has
 	 *   completed the job ("hardware fence").
 	 * * On failure: NULL or an ERR_PTR.
+	 *
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
 
--
2.48.1