Message-ID: <4fae2d1e-e399-9e2b-60dc-b8a78333845f@collabora.com>
Date: Mon, 7 Oct 2019 09:14:04 -0700
From: Tomeu Vizoso <tomeu.vizoso@...labora.com>
To: Neil Armstrong <narmstrong@...libre.com>,
Steven Price <steven.price@....com>,
Daniel Vetter <daniel@...ll.ch>,
David Airlie <airlied@...ux.ie>, Rob Herring <robh@...nel.org>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@...labora.com>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] drm/panfrost: Handle resetting on timeout better
On 10/7/19 6:09 AM, Neil Armstrong wrote:
> Hi Steven,
>
> On 07/10/2019 14:50, Steven Price wrote:
>> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
>> and on a timeout has to stop all the schedulers to safely perform a
>> reset. However, more than one scheduler can trigger a timeout at the same
>> time. This race condition results in jobs being freed while they are
>> still in use.
>>
>> When stopping other slots, use cancel_delayed_work_sync() to ensure that
>> any timeout started for that slot has completed. Also use
>> mutex_trylock() to obtain reset_lock. This means that only one thread
>> attempts the reset; the other threads simply complete without doing
>> anything (the first thread waits for this in the call to
>> cancel_delayed_work_sync()).
>>
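The pattern described above, in isolation (a rough sketch with hypothetical
names, not the driver code itself): each slot's timeout handler races to take
the lock, exactly one wins and performs the reset, and
cancel_delayed_work_sync() lets the winner wait until the losers have actually
returned:

	#include <linux/mutex.h>
	#include <linux/workqueue.h>

	#define NUM_SLOTS 2	/* one scheduler per job slot, as in panfrost */

	static DEFINE_MUTEX(reset_lock);
	static struct delayed_work slot_tdr[NUM_SLOTS];	/* hypothetical TDR works */

	/* Invoked from each slot's TDR work when that slot times out. */
	static void slot_timeout(int me)
	{
		int i;

		/* Several slots may time out at once; only one thread resets. */
		if (!mutex_trylock(&reset_lock))
			return;

		for (i = 0; i < NUM_SLOTS; i++)
			if (i != me)
				/* Wait out any handler still running on slot i. */
				cancel_delayed_work_sync(&slot_tdr[i]);

		/* ... all handlers quiesced, safe to reset the GPU ... */

		mutex_unlock(&reset_lock);
	}

The losers never block on the mutex (trylock only), so the winner's
cancel_delayed_work_sync() cannot deadlock against them.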
>> While we're here and since the function is already dependent on
>> sched_job not being NULL, let's remove the unnecessary checks, along
>> with a commented out call to panfrost_core_dump() which has never
>> existed in mainline.
>>
>
> A Fixes: tag would be welcome here so it can be backported to v5.3
>
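For reference, that would be a one-line trailer next to the Signed-off-by,
along the lines of the following (assuming the race dates back to the initial
driver; the exact commit would need confirming):

	Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")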
>> Signed-off-by: Steven Price <steven.price@....com>
>> ---
>> This is a tidied-up version of the patch originally posted here:
>> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>>
>> drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
>> 1 file changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index a58551668d9a..dcc9a7603685 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>> job_read(pfdev, JS_TAIL_LO(js)),
>> sched_job);
>>
>> - mutex_lock(&pfdev->reset_lock);
>> + if (!mutex_trylock(&pfdev->reset_lock))
>> + return;
>>
>> - for (i = 0; i < NUM_JOB_SLOTS; i++)
>> - drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
>> + for (i = 0; i < NUM_JOB_SLOTS; i++) {
>> + struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
>> +
>> + drm_sched_stop(sched, sched_job);
>> + if (js != i)
>> + /* Ensure any timeouts on other slots have finished */
>> + cancel_delayed_work_sync(&sched->work_tdr);
>> + }
>>
>> - if (sched_job)
>> - drm_sched_increase_karma(sched_job);
>> + drm_sched_increase_karma(sched_job);
>
> Indeed looks cleaner.
>
>>
>> spin_lock_irqsave(&pfdev->js->job_lock, flags);
>> for (i = 0; i < NUM_JOB_SLOTS; i++) {
>> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>> }
>> spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>>
>> - /* panfrost_core_dump(pfdev); */
>
> This should be cleaned up in another patch!
Seems to me that this should be some kind of TODO; see
etnaviv_core_dump() for the kind of things we could be doing.
Maybe we can delete this line and mention this in the TODO file?
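Something along these lines, perhaps (a rough sketch only: panfrost_core_dump()
and the snapshot contents are hypothetical, but the devcoredump API is what
etnaviv builds on):

	#include <linux/devcoredump.h>
	#include <linux/vmalloc.h>

	static void panfrost_core_dump(struct panfrost_device *pfdev)
	{
		size_t len = 4096;	/* placeholder: depends on what we snapshot */
		void *buf = vmalloc(len);

		if (!buf)
			return;

		/* ... dump GPU registers, job descriptors and BO contents ... */

		/* devcoredump takes ownership of buf and vfree()s it when read */
		dev_coredumpv(pfdev->dev, buf, len, GFP_KERNEL);
	}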
Cheers,
Tomeu
>>
>> panfrost_devfreq_record_transition(pfdev, js);
>> panfrost_device_reset(pfdev);
>>
>
> Thanks,
> Testing it right now with the last change removed (it doesn't apply on v5.3
> with it); results in a few hours... or minutes!
>
>
> Neil
>