Message-Id: <20191007125014.12595-1-steven.price@arm.com>
Date: Mon, 7 Oct 2019 13:50:14 +0100
From: Steven Price <steven.price@....com>
To: Daniel Vetter <daniel@...ll.ch>, David Airlie <airlied@...ux.ie>,
Rob Herring <robh@...nel.org>,
Tomeu Vizoso <tomeu.vizoso@...labora.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@...labora.com>,
Steven Price <steven.price@....com>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Neil Armstrong <narmstrong@...libre.com>
Subject: [PATCH] drm/panfrost: Handle resetting on timeout better
Panfrost uses one scheduler per job slot (so two in practice), and on a
timeout it has to stop all the schedulers to safely perform a reset.
However, more than one scheduler can trigger a timeout at the same time,
and this race condition results in jobs being freed while they are still
in use.
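
To illustrate the shape of the race, here is a minimal userspace sketch
(pthreads stand in for the kernel's mutex API; reset_lock,
timeout_handler and the slot count are illustrative names, not the
driver's code). With a plain lock, both slots' timeout handlers simply
serialize: each one runs the full reset sequence, and the second pass
operates on jobs the first pass has already torn down.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reset_lock = PTHREAD_MUTEX_INITIALIZER;
static int resets;

static void *timeout_handler(void *slot)
{
	/* Both handlers take the lock in turn and both "reset". */
	pthread_mutex_lock(&reset_lock);
	resets++;
	printf("slot %ld runs reset #%d\n", (long)slot, resets);
	pthread_mutex_unlock(&reset_lock);
	return NULL;
}

int main(void)
{
	pthread_t t[2];
	long i;

	for (i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, timeout_handler, (void *)i);
	for (i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	return 0;
}
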
When stopping the other slots, use cancel_delayed_work_sync() to ensure
that any timeout already started for that slot has completed. Also
obtain reset_lock with mutex_trylock(), so that only one thread attempts
the reset; the other threads simply return without doing anything (the
first thread waits for them in its call to
cancel_delayed_work_sync()).
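
The trylock half of the pattern sketched in userspace (again with
illustrative names and pthreads rather than the kernel APIs): only the
handler that wins the lock performs the reset, and concurrent handlers
return immediately.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reset_lock = PTHREAD_MUTEX_INITIALIZER;

static void *timeout_handler(void *slot)
{
	/* Losers bail out instead of queueing a second reset. */
	if (pthread_mutex_trylock(&reset_lock) != 0) {
		printf("slot %ld: reset already in progress\n",
		       (long)slot);
		return NULL;
	}

	/*
	 * In the driver, the winner additionally waits for the losers
	 * via cancel_delayed_work_sync() on the other slots' timeout
	 * work before resetting; that step has no direct pthreads
	 * analogue and is elided here.
	 */
	printf("slot %ld performs the reset\n", (long)slot);
	pthread_mutex_unlock(&reset_lock);
	return NULL;
}

int main(void)
{
	pthread_t t[2];
	long i;

	for (i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, timeout_handler, (void *)i);
	for (i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	return 0;
}
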
While we're here, and since the function already depends on sched_job
not being NULL, remove the unnecessary NULL checks, along with a
commented-out call to panfrost_core_dump() which has never existed in
mainline.

Signed-off-by: Steven Price <steven.price@....com>
---
This is a tidied-up version of the patch originally posted here:
http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index a58551668d9a..dcc9a7603685 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 		job_read(pfdev, JS_TAIL_LO(js)),
 		sched_job);
 
-	mutex_lock(&pfdev->reset_lock);
+	if (!mutex_trylock(&pfdev->reset_lock))
+		return;
 
-	for (i = 0; i < NUM_JOB_SLOTS; i++)
-		drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
+	for (i = 0; i < NUM_JOB_SLOTS; i++) {
+		struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
+
+		drm_sched_stop(sched, sched_job);
+		if (js != i)
+			/* Ensure any timeouts on other slots have finished */
+			cancel_delayed_work_sync(&sched->work_tdr);
+	}
 
-	if (sched_job)
-		drm_sched_increase_karma(sched_job);
+	drm_sched_increase_karma(sched_job);
 
 	spin_lock_irqsave(&pfdev->js->job_lock, flags);
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
@@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 	}
 	spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
 
-	/* panfrost_core_dump(pfdev); */
 	panfrost_devfreq_record_transition(pfdev, js);
 	panfrost_device_reset(pfdev);
--
2.20.1