Message-ID: <5c4c610e-26ec-447c-b4db-ad38e994720b@amd.com>
Date: Fri, 20 Dec 2024 13:53:34 +0100
From: Christian König <christian.koenig@....com>
To: Philipp Stanner <phasta@...nel.org>, Luben Tuikov <ltuikov89@...il.com>,
 Matthew Brost <matthew.brost@...el.com>, Danilo Krummrich <dakr@...nel.org>,
 Philipp Stanner <pstanner@...hat.com>,
 Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
 Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
 David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
 Sumit Semwal <sumit.semwal@...aro.org>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
 linux-media@...r.kernel.org, linaro-mm-sig@...ts.linaro.org,
 Tvrtko Ursulin <tursulin@...ulin.net>,
 Andrey Grodzovsky <andrey.grodzovsky@....com>
Subject: Re: [PATCH] drm/sched: Document run_job() refcount hazard

On 20.12.24 at 13:45, Philipp Stanner wrote:
> From: Philipp Stanner <pstanner@...hat.com>
>
> drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
> That fence is signalled by the driver once the hardware completed the
> associated job. The scheduler does not increment the reference count on
> that fence, but implicitly expects to inherit this fence from run_job().
>
> This is relatively subtle and prone to misunderstandings.
>
> This implies that, to keep a reference for itself, a driver needs to
> call dma_fence_get() in addition to dma_fence_init() in that callback.
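
Purely as an illustration of that rule (the foo_* names, job layout and
locking below are made up, not taken from any real driver), a driver's
run_job() would then look roughly like this:

static struct dma_fence *foo_run_job(struct drm_sched_job *sched_job)
{
	struct foo_job *job = to_foo_job(sched_job);	/* hypothetical wrapper */

	/*
	 * The reference created by dma_fence_init() is the one the
	 * scheduler inherits from this callback...
	 */
	dma_fence_init(&job->hw_fence, &foo_fence_ops,
		       &job->fence_lock, job->fence_context, job->seqno);

	/*
	 * ...so take a second reference which the driver keeps in order to
	 * signal the fence from its completion interrupt later on.
	 */
	dma_fence_get(&job->hw_fence);

	foo_submit_to_hw(job);		/* hypothetical hardware submission */

	return &job->hw_fence;
}
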
>
> It's further complicated by the fact that the scheduler even decrements
> the refcount in drm_sched_run_job_work() since it created a new
> reference in drm_sched_fence_scheduled(). It does, however, still use
> its pointer to the fence after calling dma_fence_put() - which is safe
> because of the aforementioned new reference, but actually still violates
> the refcounting rules.
>
> Improve the explanatory comment for that decrement.
>
> Move the call to dma_fence_put() to the position behind the last usage
> of the fence.
>
> Document the necessity to increment the reference count in
> drm_sched_backend_ops.run_job().
>
> Cc: Christian König <christian.koenig@....com>
> Cc: Tvrtko Ursulin <tursulin@...ulin.net>
> Cc: Andrey Grodzovsky <andrey.grodzovsky@....com>
> Signed-off-by: Philipp Stanner <pstanner@...hat.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
>   include/drm/gpu_scheduler.h            | 20 ++++++++++++++++----
>   2 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 7ce25281c74c..d6f8df39d848 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
>   	drm_sched_fence_scheduled(s_fence, fence);
>   
>   	if (!IS_ERR_OR_NULL(fence)) {
> -		/* Drop for original kref_init of the fence */
> -		dma_fence_put(fence);
> -
>   		r = dma_fence_add_callback(fence, &sched_job->cb,
>   					   drm_sched_job_done_cb);
>   		if (r == -ENOENT)
>   			drm_sched_job_done(sched_job, fence->error);
>   		else if (r)
>   			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
> +
> +		/*
> +		 * s_fence took a new reference to fence in the call to
> +		 * drm_sched_fence_scheduled() above. The reference passed by
> +		 * run_job() above is now not needed any longer. Drop it.
> +		 */
> +		dma_fence_put(fence);
>   	} else {
>   		drm_sched_job_done(sched_job, IS_ERR(fence) ?
>   				   PTR_ERR(fence) : 0);
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 95e17504e46a..a1f5c9a14278 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -420,10 +420,22 @@ struct drm_sched_backend_ops {
>   					 struct drm_sched_entity *s_entity);
>   
>   	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved. This may be called multiple times, if
> +	 * timedout_job() has happened and drm_sched_job_recovery() decides to
> +	 * try it again.

Maybe we should improve that here as well while we're at it.

That drm_sched_job_recovery() can call this multiple times actually goes 
against the dma_fence rules since drivers can't easily allocate a new HW 
fence.

Something like "The deprecated drm_sched_job_recovery() function might
call this again, but using it is strongly discouraged because it
violates the dma_fence memory allocation rules."

On the other hand, this can of course be a separate patch.

> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * Returns: dma_fence the driver must signal once the hardware has
> +	 *	completed the job ("hardware fence").
> +	 *
> +	 * Note that the scheduler expects to 'inherit' its own reference to
> +	 * this fence from the callback. It does not invoke an extra
> +	 * dma_fence_get() on it. Consequently, this callback must return a
> +	 * fence whose refcount is at least 2: One for the scheduler's
> +	 * reference returned here, another one for the reference kept by the
> +	 * driver.

Well, the driver actually doesn't need any extra reference. The scheduler
just needs to guarantee that this reference isn't dropped before the fence
has signaled.
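
Illustratively (again with made-up foo_* names, not taken from any real
driver), under that contract the driver would hand the scheduler the only
reference and never take an extra one:

static struct dma_fence *foo_run_job(struct drm_sched_job *sched_job)
{
	struct foo_job *job = to_foo_job(sched_job);	/* hypothetical wrapper */

	dma_fence_init(&job->hw_fence, &foo_fence_ops,
		       &job->fence_lock, job->fence_context, job->seqno);

	foo_submit_to_hw(job);		/* hypothetical hardware submission */

	/*
	 * The single reference from dma_fence_init() is handed to the
	 * scheduler; the scheduler then guarantees it is not dropped before
	 * the fence has signaled, so no extra dma_fence_get() is needed.
	 */
	return &job->hw_fence;
}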

Regards,
Christian.

>   	 */
>   	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>   

