Message-ID: <2f577af9-9ff7-4e9d-b198-02727a995393@amd.com>
Date: Thu, 8 Nov 2018 16:10:17 +0000
From: "Koenig, Christian" <Christian.Koenig@....com>
To: Eric Anholt <eric@...olt.net>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Nayan Deshmukh <nayan26deshmukh@...il.com>,
"Deucher, Alexander" <Alexander.Deucher@....com>
Subject: Re: [PATCH 1/2] Revert "drm/sched: fix timeout handling v2"
On 08.11.18 at 17:04, Eric Anholt wrote:
> This reverts commit 0efd2d2f68cd5dbddf4ecd974c33133257d16a8e. Fixes
> this failure in V3D GPU reset:
>
> [ 1418.227796] Unable to handle kernel NULL pointer dereference at virtual address 00000018
> [ 1418.235947] pgd = dc4c55ca
> [ 1418.238695] [00000018] *pgd=80000040004003, *pmd=00000000
> [ 1418.244132] Internal error: Oops: 206 [#1] SMP ARM
> [ 1418.248934] Modules linked in:
> [ 1418.252001] CPU: 0 PID: 10253 Comm: kworker/0:0 Not tainted 4.19.0-rc6+ #486
> [ 1418.259058] Hardware name: Broadcom STB (Flattened Device Tree)
> [ 1418.265002] Workqueue: events drm_sched_job_timedout
> [ 1418.269986] PC is at dma_fence_remove_callback+0x8/0x50
> [ 1418.275218] LR is at drm_sched_job_timedout+0x4c/0x118
> ...
> [ 1418.415891] [<c086b754>] (dma_fence_remove_callback) from [<c06e7e6c>] (drm_sched_job_timedout+0x4c/0x118)
> [ 1418.425571] [<c06e7e6c>] (drm_sched_job_timedout) from [<c0242500>] (process_one_work+0x2c8/0x7bc)
> [ 1418.434552] [<c0242500>] (process_one_work) from [<c0242a38>] (worker_thread+0x44/0x590)
> [ 1418.442663] [<c0242a38>] (worker_thread) from [<c0249b10>] (kthread+0x160/0x168)
> [ 1418.450076] [<c0249b10>] (kthread) from [<c02010ac>] (ret_from_fork+0x14/0x28)
>
> Cc: Christian König <christian.koenig@....com>
> Cc: Nayan Deshmukh <nayan26deshmukh@...il.com>
> Cc: Alex Deucher <alexander.deucher@....com>
> Signed-off-by: Eric Anholt <eric@...olt.net>
Well, NAK. The problem here is that fence->parent is NULL, which is most
likely caused by an issue somewhere else.

We could easily work around that with an extra NULL check, but reverting
the patch would break GPU recovery again.
Christian.
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 30 +-------------------------
> 1 file changed, 1 insertion(+), 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 44fe587aaef9..bd7d11c47202 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -249,41 +249,13 @@ static void drm_sched_job_timedout(struct work_struct *work)
> {
> struct drm_gpu_scheduler *sched;
> struct drm_sched_job *job;
> - int r;
>
> sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
> -
> - spin_lock(&sched->job_list_lock);
> - list_for_each_entry_reverse(job, &sched->ring_mirror_list, node) {
> - struct drm_sched_fence *fence = job->s_fence;
> -
> - if (!dma_fence_remove_callback(fence->parent, &fence->cb))
> - goto already_signaled;
> - }
> -
> job = list_first_entry_or_null(&sched->ring_mirror_list,
> struct drm_sched_job, node);
> - spin_unlock(&sched->job_list_lock);
>
> if (job)
> - sched->ops->timedout_job(job);
> -
> - spin_lock(&sched->job_list_lock);
> - list_for_each_entry(job, &sched->ring_mirror_list, node) {
> - struct drm_sched_fence *fence = job->s_fence;
> -
> - if (!fence->parent || !list_empty(&fence->cb.node))
> - continue;
> -
> - r = dma_fence_add_callback(fence->parent, &fence->cb,
> - drm_sched_process_job);
> - if (r)
> - drm_sched_process_job(fence->parent, &fence->cb);
> -
> -already_signaled:
> - ;
> - }
> - spin_unlock(&sched->job_list_lock);
> + job->sched->ops->timedout_job(job);
> }
>
> /**