Message-ID: <734c1bbdb4c38dbb4fb8c681d7aec2f1232a488c.camel@redhat.com>
Date: Thu, 09 Jan 2025 14:47:50 +0100
From: Philipp Stanner <pstanner@...hat.com>
To: Tvrtko Ursulin <tursulin@...ulin.net>, Philipp Stanner
<phasta@...nel.org>, Luben Tuikov <ltuikov89@...il.com>, Matthew Brost
<matthew.brost@...el.com>, Danilo Krummrich <dakr@...nel.org>, Maarten
Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard
<mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>, David Airlie
<airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Sumit Semwal
<sumit.semwal@...aro.org>, Christian König
<christian.koenig@....com>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org, linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
On Thu, 2025-01-09 at 13:44 +0000, Tvrtko Ursulin wrote:
>
> On 09/01/2025 13:37, Philipp Stanner wrote:
> > From: Philipp Stanner <pstanner@...hat.com>
> >
> > drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
> > That fence is signalled by the driver once the hardware completed the
> > associated job. The scheduler does not increment the reference count on
> > that fence, but implicitly expects to inherit this fence from run_job().
> >
> > This is relatively subtle and prone to misunderstandings.
> >
> > This implies that, to keep a reference for itself, a driver needs to
> > call dma_fence_get() in addition to dma_fence_init() in that callback.
> >
> > It's further complicated by the fact that the scheduler even decrements
> > the refcount in drm_sched_run_job_work() since it created a new
> > reference in drm_sched_fence_scheduled(). It does, however, still use
> > its pointer to the fence after calling dma_fence_put() - which is safe
> > because of the aforementioned new reference, but actually still violates
> > the refcounting rules.
> >
> > Improve the explanatory comment for that decrement.
> >
> > Move the call to dma_fence_put() to the position behind the last usage
> > of the fence.
> >
> > Document the necessity to increment the reference count in
> > drm_sched_backend_ops.run_job().
> >
> > Signed-off-by: Philipp Stanner <pstanner@...hat.com>
> > ---
> >  drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
> >  include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
> >  2 files changed, 22 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > index 57da84908752..5f46c01eb01e 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
> >  		drm_sched_fence_scheduled(s_fence, fence);
> >  
> >  		if (!IS_ERR_OR_NULL(fence)) {
> > -			/* Drop for original kref_init of the fence */
> > -			dma_fence_put(fence);
> > -
> >  			r = dma_fence_add_callback(fence, &sched_job->cb,
> >  						   drm_sched_job_done_cb);
> >  			if (r == -ENOENT)
> >  				drm_sched_job_done(sched_job, fence->error);
> >  			else if (r)
> >  				DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
> > +
> > +			/*
> > +			 * s_fence took a new reference to fence in the call to
> > +			 * drm_sched_fence_scheduled() above. The reference passed by
> > +			 * run_job() above is now not needed any longer. Drop it.
> > +			 */
> > +			dma_fence_put(fence);
> >  		} else {
> >  			drm_sched_job_done(sched_job, IS_ERR(fence) ?
> >  					   PTR_ERR(fence) : 0);
> > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> > index 95e17504e46a..d5cd2a78f27c 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
> >  					 struct drm_sched_entity *s_entity);
> >
> >  	/**
> > -	 * @run_job: Called to execute the job once all of the dependencies
> > -	 * have been resolved. This may be called multiple times, if
> > -	 * timedout_job() has happened and drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the dependencies
> > +	 * have been resolved. This may be called multiple times, if
> > +	 * timedout_job() has happened and drm_sched_job_recovery() decides to
> > +	 * try it again.
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * Returns: dma_fence the driver must signal once the hardware has
> > +	 * completed the job ("hardware fence").
> > +	 *
> > +	 * Note that the scheduler expects to 'inherit' its own reference to
> > +	 * this fence from the callback. It does not invoke an extra
> > +	 * dma_fence_get() on it. Consequently, this callback must take a
> > +	 * reference for the scheduler, and additional ones for the driver's
> > +	 * respective needs.
>
> Another thing which would be really good to document here is what other
> things run_job can return apart from the fence.
> 
> For instance amdgpu can return an ERR_PTR but I haven't looked into
> other drivers.
ERR_PTR and NULL are both fine.
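The scheduler only dereferences run_job()'s return value after an IS_ERR_OR_NULL() check, so an error pointer or NULL never reaches the refcounting path. A minimal userspace sketch of that contract, where the helpers are simplified stand-ins for the kernel's ERR_PTR machinery rather than the real implementation:

```c
/* Simplified userspace stand-ins for the kernel's ERR_PTR convention. */
static void *ERR_PTR(long error)
{
	return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-4095;
}

static int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}

/*
 * Models how drm_sched_run_job_work() treats run_job()'s return value:
 * a real fence continues down the callback path, an ERR_PTR completes
 * the job with its error code, and NULL completes it with 0.
 */
static long handle_run_job_return(void *fence)
{
	if (IS_ERR_OR_NULL(fence))
		return IS_ERR(fence) ? PTR_ERR(fence) : 0;
	return 1;	/* stand-in for "valid fence, install completion callback" */
}
```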
But good catch, I'll add that in v2.
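As an aside, the refcount contract the patch documents can be sketched with a tiny userspace model. struct fence and all helpers below are illustrative stand-ins, not the real dma_fence API: run_job() returns a fence whose initial reference the scheduler inherits, so a driver that wants to keep the fence must take an extra reference itself.

```c
#include <stdlib.h>

/* Illustrative stand-in for a refcounted fence; not the real dma_fence. */
struct fence {
	int refcount;
};

static struct fence *fence_alloc(void)
{
	struct fence *f = malloc(sizeof(*f));

	f->refcount = 1;	/* like dma_fence_init(): starts with one reference */
	return f;
}

static void fence_get(struct fence *f)
{
	f->refcount++;
}

/* Returns 1 if this put released the last reference and freed the fence. */
static int fence_put(struct fence *f)
{
	if (--f->refcount == 0) {
		free(f);
		return 1;
	}
	return 0;
}

/*
 * Driver side: the scheduler inherits the initial reference from
 * run_job(), so the driver takes an extra one for its own use.
 */
static struct fence *driver_run_job(struct fence **driver_ref)
{
	struct fence *f = fence_alloc();	/* refcount == 1, scheduler's */

	fence_get(f);				/* refcount == 2, driver's */
	*driver_ref = f;
	return f;
}

/*
 * Scheduler side, mirroring the patch: take a new reference (as
 * drm_sched_fence_scheduled() does), use the fence, and only then
 * drop the reference that run_job() passed in.
 */
static void scheduler_consume(struct fence *f)
{
	fence_get(f);		/* models drm_sched_fence_scheduled() */
	(void)f->refcount;	/* last use of the run_job() reference... */
	fence_put(f);		/* ...dropped only after that use */
}
```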
P.
>
> Regards,
>
> Tvrtko
>
> >  	 */
> >  	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
> > 
>