Message-ID: <8cc1a92d-aa47-f330-21da-eb61601d47d2@collabora.com>
Date: Mon, 2 Jan 2023 18:01:02 +0300
From: Dmitry Osipenko <dmitry.osipenko@...labora.com>
To: Luben Tuikov <luben.tuikov@....com>,
Christian König <christian.koenig@....com>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
"Guilherme G. Piccoli" <gpiccoli@...lia.com>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] drm/scheduler: Fix lockup in drm_sched_entity_kill()
On 11/23/22 03:13, Dmitry Osipenko wrote:
> drm_sched_entity_kill() is invoked twice by drm_sched_entity_destroy()
> while a userspace process is exiting or being killed: first when the
> sched entity is flushed, and a second time when the entity is released.
> This causes a lockup within wait_for_completion(entity_idle) due to how
> the completion API works.
>
> Calling wait_for_completion() more times than complete() was invoked is
> an error condition that causes a lockup, because a completion internally
> keeps a counter of complete/wait calls. complete_all() must be used
> instead in such cases.
>
> This patch fixes a lockup of the Panfrost driver that is reproducible
> by killing any application in the middle of a 3D drawing operation.
>
> Fixes: 2fdb8a8f07c2 ("drm/scheduler: rework entity flush, kill and fini")
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@...labora.com>
> ---
> drivers/gpu/drm/scheduler/sched_entity.c | 2 +-
> drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
> 2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index fe09e5be79bd..15d04a0ec623 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -81,7 +81,7 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
> init_completion(&entity->entity_idle);
>
> /* We start in an idle state. */
> - complete(&entity->entity_idle);
> + complete_all(&entity->entity_idle);
>
> spin_lock_init(&entity->rq_lock);
> spsc_queue_init(&entity->job_queue);
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 6ce04c2e90c0..857ec20be9e8 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1026,7 +1026,7 @@ static int drm_sched_main(void *param)
> sched_job = drm_sched_entity_pop_job(entity);
>
> if (!sched_job) {
> - complete(&entity->entity_idle);
> + complete_all(&entity->entity_idle);
> continue;
> }
>
> @@ -1037,7 +1037,7 @@ static int drm_sched_main(void *param)
>
> trace_drm_run_job(sched_job, entity);
> fence = sched->ops->run_job(sched_job);
> - complete(&entity->entity_idle);
> + complete_all(&entity->entity_idle);
> drm_sched_fence_scheduled(s_fence);
>
> if (!IS_ERR_OR_NULL(fence)) {
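
For reference, the lockup boils down to the completion's internal "done"
counter: complete() bumps it by one and each wait_for_completion()
consumes one count, while complete_all() sets it to UINT_MAX so every
current and future waiter returns immediately (until a reinit_completion()).
A minimal sketch of those semantics, as a hypothetical standalone demo
module rather than the scheduler code itself:

#include <linux/completion.h>
#include <linux/module.h>

static struct completion demo_idle;

static int __init demo_init(void)
{
	init_completion(&demo_idle);

	complete(&demo_idle);		 /* done counter: 0 -> 1 */
	wait_for_completion(&demo_idle); /* consumes it: 1 -> 0 */

	/*
	 * A second wait_for_completion() at this point would block
	 * forever, since nothing bumps the counter again -- the same
	 * pattern as the drm_sched_entity_kill() lockup.
	 */

	complete_all(&demo_idle);	 /* done = UINT_MAX */
	wait_for_completion(&demo_idle); /* returns immediately */
	wait_for_completion(&demo_idle); /* returns immediately too */

	return 0;
}
module_init(demo_init);

static void __exit demo_exit(void)
{
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");

Since the entity can be waited on twice during teardown, complete_all()
is the right primitive for entity_idle.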
Applied to drm-misc-next-fixes
--
Best regards,
Dmitry