Message-Id: <20230714-drm-sched-fixes-v1-3-c567249709f7@asahilina.net>
Date: Fri, 14 Jul 2023 17:21:31 +0900
From: Asahi Lina <lina@...hilina.net>
To: Luben Tuikov <luben.tuikov@....com>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>
Cc: Faith Ekstrand <faith.ekstrand@...labora.com>,
Alyssa Rosenzweig <alyssa@...enzweig.io>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org, asahi@...ts.linux.dev,
Asahi Lina <lina@...hilina.net>
Subject: [PATCH 3/3] drm/scheduler: Clean up jobs when the scheduler is
torn down.

drm_sched_fini() currently leaves any pending jobs dangling, which
causes segfaults and other badness when job completion fences are
signaled after the scheduler is torn down.

Explicitly detach all jobs from their completion callbacks and free
them. This makes it possible to write a sensible safe abstraction for
drm_sched, without having to externally duplicate the tracking of
in-flight jobs.

This shouldn't regress any existing drivers, since calling
drm_sched_fini() with any pending jobs is broken and this change should
be a no-op if there are no pending jobs.

Signed-off-by: Asahi Lina <lina@...hilina.net>
---
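Not part of the commit message: below is a minimal illustrative sketch
of the failure mode this patch addresses. The "hypothetical_device"
driver, its "sched" member, and the teardown timing are assumptions made
up for illustration; only drm_sched_fini() is real drm_sched API.

#include <drm/gpu_scheduler.h>

static void hypothetical_driver_fini(struct hypothetical_device *hdev)
{
	/*
	 * A job was queued earlier and its hardware fence has not
	 * signaled yet, so the job's completion callback is still
	 * registered on that fence.
	 */
	drm_sched_fini(&hdev->sched);

	/*
	 * Without this patch, the pending job is left dangling: when the
	 * hardware fence signals later, the completion callback runs
	 * against job and scheduler memory that may already have been
	 * freed.
	 */
}
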
 drivers/gpu/drm/scheduler/sched_main.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1f3bc3606239..a4da4aac0efd 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1186,10 +1186,38 @@ EXPORT_SYMBOL(drm_sched_init);
 void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
 	struct drm_sched_entity *s_entity;
+	struct drm_sched_job *s_job, *tmp;
 	int i;
 
-	if (sched->thread)
-		kthread_stop(sched->thread);
+	if (!sched->thread)
+		return;
+
+	/*
+	 * Stop the scheduler, detaching all jobs from their hardware callbacks
+	 * and cleaning up complete jobs.
+	 */
+	drm_sched_stop(sched, NULL);
+
+	/*
+	 * Iterate through the pending job list and free all jobs.
+	 * This assumes the driver has either guaranteed that jobs are already stopped,
+	 * or that it is otherwise responsible for keeping any necessary data structures
+	 * for in-progress jobs alive even when the free_job() callback is called early
+	 * (e.g. by putting them in its own queue or doing its own refcounting).
+	 */
+	list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
+		spin_lock(&sched->job_list_lock);
+		list_del_init(&s_job->list);
+		spin_unlock(&sched->job_list_lock);
+
+		dma_fence_set_error(&s_job->s_fence->finished, -ESRCH);
+		drm_sched_fence_finished(s_job->s_fence);
+
+		WARN_ON(s_job->s_fence->parent);
+		sched->ops->free_job(s_job);
+	}
+
+	kthread_stop(sched->thread);
 
 	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
 		struct drm_sched_rq *rq = &sched->sched_rq[i];
--
2.40.1
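
As a follow-up to the comment in the hunk above about free_job() being
called early: here is a minimal sketch of one way a driver could satisfy
that contract, by refcounting its job wrapper so the backing memory
stays alive until both the scheduler and the hardware are done with it.
All "hypo_*" names are hypothetical driver code, not drm_sched API;
drm_sched_job_cleanup() and the kref helpers are the only real calls.

#include <linux/kref.h>
#include <linux/slab.h>
#include <drm/gpu_scheduler.h>

struct hypo_job {
	struct drm_sched_job base;
	struct kref refcount;	/* one ref for the scheduler, one for the hw queue */
};

static void hypo_job_release(struct kref *kref)
{
	struct hypo_job *job = container_of(kref, struct hypo_job, refcount);

	drm_sched_job_cleanup(&job->base);
	kfree(job);
}

/* ops->free_job() callback; with this patch it may run as early as drm_sched_fini(). */
static void hypo_free_job(struct drm_sched_job *sched_job)
{
	struct hypo_job *job = container_of(sched_job, struct hypo_job, base);

	kref_put(&job->refcount, hypo_job_release);
}

/* The hardware completion path drops its own reference once the job retires. */
static void hypo_hw_job_retired(struct hypo_job *job)
{
	kref_put(&job->refcount, hypo_job_release);
}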