Message-ID: <9343165f-2533-f363-4942-719ee9b7d2a4@amd.com>
Date:   Sat, 15 Jul 2023 03:14:10 -0400
From:   Luben Tuikov <luben.tuikov@....com>
To:     Asahi Lina <lina@...hilina.net>, David Airlie <airlied@...il.com>,
        Daniel Vetter <daniel@...ll.ch>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Christian König <christian.koenig@....com>
Cc:     Faith Ekstrand <faith.ekstrand@...labora.com>,
        Alyssa Rosenzweig <alyssa@...enzweig.io>,
        dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
        linux-media@...r.kernel.org, asahi@...ts.linux.dev
Subject: Re: [PATCH 3/3] drm/scheduler: Clean up jobs when the scheduler is
 torn down.

On 2023-07-14 04:21, Asahi Lina wrote:
> drm_sched_fini() currently leaves any pending jobs dangling, which
> causes segfaults and other badness when job completion fences are
> signaled after the scheduler is torn down.

If there are pending jobs, ideally we want to call into the driver,
so that it can release any resources it may be holding for those jobs.
The idea behind "pending" is that the jobs are pending in the hardware,
and we don't know their state until they are signalled or their callback
is called. (Or unless the device is reset and we get a notification of
that fact.)
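
For reference, this is roughly the path by which a job becomes "pending"
(paraphrased from drm_sched_main() in sched_main.c; error handling omitted):

        /* drm_sched_job_begin() puts the job on sched->pending_list. */
        drm_sched_job_begin(sched_job);

        /* The driver starts the job on the hardware and returns its
         * hardware fence. */
        fence = sched->ops->run_job(sched_job);
        drm_sched_fence_scheduled(s_fence);

        /* From here on, only this callback firing (or a device reset)
         * tells us what happened to the job. */
        r = dma_fence_add_callback(fence, &sched_job->cb,
                                   drm_sched_job_done_cb);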

> Explicitly detach all jobs from their completion callbacks and free
> them. This makes it possible to write a sensible safe abstraction for
> drm_sched, without having to externally duplicate the tracking of
> in-flight jobs.
> 
> This shouldn't regress any existing drivers, since calling
> drm_sched_fini() with any pending jobs is broken and this change should
> be a no-op if there are no pending jobs.

While this statement is true on its own, it kind of contradicts
the premise of the first paragraph.

> Signed-off-by: Asahi Lina <lina@...hilina.net>
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 32 ++++++++++++++++++++++++++++++--
>  1 file changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 1f3bc3606239..a4da4aac0efd 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1186,10 +1186,38 @@ EXPORT_SYMBOL(drm_sched_init);
>  void drm_sched_fini(struct drm_gpu_scheduler *sched)
>  {
>  	struct drm_sched_entity *s_entity;
> +	struct drm_sched_job *s_job, *tmp;
>  	int i;
>  
> -	if (sched->thread)
> -		kthread_stop(sched->thread);
> +	if (!sched->thread)
> +		return;
> +
> +	/*
> +	 * Stop the scheduler, detaching all jobs from their hardware callbacks
> +	 * and cleaning up complete jobs.
> +	 */
> +	drm_sched_stop(sched, NULL);
> +
> +	/*
> +	 * Iterate through the pending job list and free all jobs.
> +	 * This assumes the driver has either guaranteed jobs are already stopped, or that
> +	 * otherwise it is responsible for keeping any necessary data structures for
> +	 * in-progress jobs alive even when the free_job() callback is called early (e.g. by
> +	 * putting them in its own queue or doing its own refcounting).
> +	 */
> +	list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
> +		spin_lock(&sched->job_list_lock);
> +		list_del_init(&s_job->list);
> +		spin_unlock(&sched->job_list_lock);
> +
> +		dma_fence_set_error(&s_job->s_fence->finished, -ESRCH);
> +		drm_sched_fence_finished(s_job->s_fence);

I'd imagine it's better to rebase this on top of drm-misc-next where
drm_sched_fence_finished() takes one more parameter--the error.
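
On drm-misc-next the two calls above would then presumably collapse into
one, with drm_sched_fence_finished() setting the error on the finished
fence itself (a sketch, assuming that signature):

        drm_sched_fence_finished(s_job->s_fence, -ESRCH);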

> +
> +		WARN_ON(s_job->s_fence->parent);
> +		sched->ops->free_job(s_job);
> +	}
> +
> +	kthread_stop(sched->thread);
>  
>  	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>  		struct drm_sched_rq *rq = &sched->sched_rq[i];
> 

Conceptually I don't mind this patch--I see what it is trying to achieve,
but technically, we want the driver to detect GPU removal and hand shared
resources, such as "jobs" (which DRM is also aware of), back to DRM.

In the case where we're initiating the teardown, we should notify the driver
that we're about to forget jobs (resources), so that it knows to return them,
or that it shouldn't notify us about them (since we've told it we're
forgetting them).

(Note also that in this latter case, traditionally, the device would be reset,
so that we can guarantee that it has forgotten all the shared resources which
we are about to tear down. This is somewhat more complicated with GPUs, thus
the method pointed out above.)
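
One hypothetical shape for that notification--a sketch only, not an
existing drm_sched interface--would be a hook in drm_sched_backend_ops
that drm_sched_fini() calls for each pending job it is about to forget:

        /*
         * Hypothetical callback (sketch): the scheduler is about to
         * forget @sched_job. The driver should release the hardware
         * resources backing it and must not signal its fences afterwards.
         */
        void (*forget_job)(struct drm_sched_job *sched_job);

drm_sched_fini() would then call it just before free_job():

        if (sched->ops->forget_job)
                sched->ops->forget_job(s_job);
        sched->ops->free_job(s_job);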
-- 
Regards,
Luben
