Message-ID: <015c204472811734b1e2a12d044ac3b13926c617.camel@mailbox.org>
Date: Thu, 30 Oct 2025 16:23:44 +0100
From: Philipp Stanner <phasta@...lbox.org>
To: Philipp Stanner <phasta@...nel.org>, Matthew Brost
<matthew.brost@...el.com>, Danilo Krummrich <dakr@...nel.org>, Christian
König <ckoenig.leichtzumerken@...il.com>, David Airlie
<airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Tvrtko Ursulin
<tvrtko.ursulin@...lia.com>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org
Subject: Re: [PATCH v3] drm/sched: Add warning for removing hack in
drm_sched_fini()
On Thu, 2025-10-23 at 14:34 +0200, Philipp Stanner wrote:
> The assembled developers agreed at the X.Org Developers Conference 2025
> that the hack added for amdgpu in drm_sched_fini() shall be removed. It
> shouldn't be needed by amdgpu anymore.
>
> As it's unclear whether all drivers really follow the lifetime rule of
> entities having to be torn down before their scheduler, it is reasonable
> to warn for a while before removing the hack.
>
> Add a warning in drm_sched_fini() that fires if an entity is still
> active.
>
> Signed-off-by: Philipp Stanner <phasta@...nel.org>
Can someone review this?
At XDC we agreed on removing the hack, but wanted to add a warning
print first for a few releases, to really catch whether there are any
users left.
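
For context, the lifetime rule this warning checks boils down to drivers
destroying their entities before the scheduler they were attached to,
roughly like this in a driver's teardown path (foo_device and its fields
are made up for illustration):

static void foo_device_fini(struct foo_device *fdev)
{
	/* Entities have to be torn down first ... */
	drm_sched_entity_destroy(&fdev->entity);

	/* ... and only then the scheduler they were attached to. */
	drm_sched_fini(&fdev->sched);
}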
Thx
P.
> ---
> Changes in v3:
> - Add a READ_ONCE() + comment to make the warning slightly less
> horrible.
>
> Changes in v2:
> - Fix broken brackets.
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 46119aacb809..31039b08c7b9 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1419,7 +1419,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
>  		struct drm_sched_rq *rq = sched->sched_rq[i];
>
>  		spin_lock(&rq->lock);
> -		list_for_each_entry(s_entity, &rq->entities, list)
> +		list_for_each_entry(s_entity, &rq->entities, list) {
>  			/*
>  			 * Prevents reinsertion and marks job_queue as idle,
>  			 * it will be removed from the rq in drm_sched_entity_fini()
> @@ -1440,8 +1440,15 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
>  			 * For now, this remains a potential race in all
>  			 * drivers that keep entities alive for longer than
>  			 * the scheduler.
> +			 *
> +			 * The READ_ONCE() is there to make the lockless read
> +			 * (warning about the lockless write below) slightly
> +			 * less broken...
>  			 */
> +			if (!READ_ONCE(s_entity->stopped))
> +				dev_warn(sched->dev, "Tearing down scheduler with active entities!\n");
>  			s_entity->stopped = true;
> +		}
>  		spin_unlock(&rq->lock);
>  		kfree(sched->sched_rq[i]);
>  	}