Message-ID: <894cf4cdb7e14b2a21dcf87bfeac4776cb695395.camel@mailbox.org>
Date: Thu, 08 May 2025 13:03:29 +0200
From: Philipp Stanner <phasta@...lbox.org>
To: Philipp Stanner <phasta@...nel.org>, Lyude Paul <lyude@...hat.com>,
Danilo Krummrich <dakr@...nel.org>, David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>, Matthew Brost <matthew.brost@...el.com>,
Christian König <ckoenig.leichtzumerken@...il.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard
<mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>, Tvrtko
Ursulin <tvrtko.ursulin@...lia.com>
Cc: dri-devel@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 6/6] drm/sched: Port unit tests to new cleanup design
On Thu, 2025-04-24 at 11:55 +0200, Philipp Stanner wrote:
> So far, the unit tests manually took care of avoiding memory leaks
> that might have occurred when calling drm_sched_fini().
> 
> The scheduler now takes care of avoiding memory leaks by itself,
> provided the driver implements the callback
> drm_sched_backend_ops.kill_fence_context().
> 
> Implement that callback for the unit tests and remove the manual
> cleanup code.
@Tvrtko: On a scale from 1-10, how much do you love this patch? :)
P.
>
> Signed-off-by: Philipp Stanner <phasta@...nel.org>
> ---
>  .../gpu/drm/scheduler/tests/mock_scheduler.c | 34 ++++++++++++-------
> 1 file changed, 21 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> index f999c8859cf7..a72d26ca8262 100644
> --- a/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/tests/mock_scheduler.c
> @@ -228,10 +228,30 @@ static void mock_sched_free_job(struct drm_sched_job *sched_job)
>  	/* Mock job itself is freed by the kunit framework. */
>  }
>  
> +static void mock_sched_fence_context_kill(struct drm_gpu_scheduler *gpu_sched)
> +{
> +	struct drm_mock_scheduler *sched = drm_sched_to_mock_sched(gpu_sched);
> +	struct drm_mock_sched_job *job;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&sched->lock, flags);
> +	list_for_each_entry(job, &sched->job_list, link) {
> +		spin_lock(&job->lock);
> +		if (!dma_fence_is_signaled_locked(&job->hw_fence)) {
> +			dma_fence_set_error(&job->hw_fence, -ECANCELED);
> +			dma_fence_signal_locked(&job->hw_fence);
> +		}
> +		complete(&job->done);
> +		spin_unlock(&job->lock);
> +	}
> +	spin_unlock_irqrestore(&sched->lock, flags);
> +}
> +
>  static const struct drm_sched_backend_ops drm_mock_scheduler_ops = {
>  	.run_job = mock_sched_run_job,
>  	.timedout_job = mock_sched_timedout_job,
> -	.free_job = mock_sched_free_job
> +	.free_job = mock_sched_free_job,
> +	.kill_fence_context = mock_sched_fence_context_kill,
>  };
>  
>  /**
> @@ -300,18 +320,6 @@ void drm_mock_sched_fini(struct drm_mock_scheduler *sched)
>  		drm_mock_sched_job_complete(job);
>  	spin_unlock_irqrestore(&sched->lock, flags);
>  
> -	/*
> -	 * Free completed jobs and jobs not yet processed by the DRM scheduler
> -	 * free worker.
> -	 */
> -	spin_lock_irqsave(&sched->lock, flags);
> -	list_for_each_entry_safe(job, next, &sched->done_list, link)
> -		list_move_tail(&job->link, &list);
> -	spin_unlock_irqrestore(&sched->lock, flags);
> -
> -	list_for_each_entry_safe(job, next, &list, link)
> -		mock_sched_free_job(&job->base);
> -
>  	drm_sched_fini(&sched->base);
>  }
>