Message-ID: <CAF6AEGv8sYG=72ne4wMx_OQwWOUkx88fYdKM2EEszdmYzOrg1A@mail.gmail.com>
Date: Thu, 4 Aug 2022 10:33:16 -0700
From: Rob Clark <robdclark@...il.com>
To: Akhil P Oommen <quic_akhilpo@...cinc.com>
Cc: dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
freedreno@...ts.freedesktop.org,
Rob Clark <robdclark@...omium.org>,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] drm/msm: Move hangcheck timer restart

On Thu, Aug 4, 2022 at 12:53 AM Akhil P Oommen <quic_akhilpo@...cinc.com> wrote:
>
> On 8/4/2022 1:59 AM, Rob Clark wrote:
> > On Wed, Aug 3, 2022 at 12:52 PM Akhil P Oommen <quic_akhilpo@...cinc.com> wrote:
> >> On 8/3/2022 10:53 PM, Rob Clark wrote:
> >>> From: Rob Clark <robdclark@...omium.org>
> >>>
> >>> Don't directly restart the hangcheck timer from the timer handler, but
> >>> instead start it after the recover_worker replays remaining jobs.
> >>>
> >>> If the kthread is blocked for other reasons, there is no point to
> >>> immediately restart the timer. Fixes a random symptom of the problem
> >>> fixed in the next patch.
> >>>
> >>> Signed-off-by: Rob Clark <robdclark@...omium.org>
> >>> ---
> >>> drivers/gpu/drm/msm/msm_gpu.c | 14 +++++++++-----
> >>> 1 file changed, 9 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> >>> index fba85f894314..8f9c48eabf7d 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gpu.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> >>> @@ -328,6 +328,7 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
> >>> }
> >>>
> >>> static void retire_submits(struct msm_gpu *gpu);
> >>> +static void hangcheck_timer_reset(struct msm_gpu *gpu);
> >>>
> >>> static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
> >>> {
> >>> @@ -420,6 +421,8 @@ static void recover_worker(struct kthread_work *work)
> >>> }
> >>>
> >>> if (msm_gpu_active(gpu)) {
> >>> + bool restart_hangcheck = false;
> >>> +
> >>> /* retire completed submits, plus the one that hung: */
> >>> retire_submits(gpu);
> >>>
> >>> @@ -436,10 +439,15 @@ static void recover_worker(struct kthread_work *work)
> >>> unsigned long flags;
> >>>
> >>> spin_lock_irqsave(&ring->submit_lock, flags);
> >>> - list_for_each_entry(submit, &ring->submits, node)
> >>> + list_for_each_entry(submit, &ring->submits, node) {
> >>> gpu->funcs->submit(gpu, submit);
> >>> + restart_hangcheck = true;
> >>> + }
> >>> spin_unlock_irqrestore(&ring->submit_lock, flags);
> >>> }
> >>> +
> >>> + if (restart_hangcheck)
> >>> + hangcheck_timer_reset(gpu);
> >>> }
> >>>
> >>> mutex_unlock(&gpu->lock);
> >>> @@ -515,10 +523,6 @@ static void hangcheck_handler(struct timer_list *t)
> >>> kthread_queue_work(gpu->worker, &gpu->recover_work);
> >>> }
> >>>
> >>> - /* if still more pending work, reset the hangcheck timer: */
> >> In the scenario mentioned here, shouldn't we restart the timer?
> > yeah, actually the case where we don't want to restart the timer is
> > *only* when we schedule recover_work..
> >
> > BR,
> > -R
> Not sure if your codebase is different, but based on the msm-next branch,
> when "if (fence != ring->hangcheck_fence)" is true, we now skip
> rescheduling the timer. I don't think that is what we want. There should
> be a hangcheck timer running as long as there is an active submit,
> unless we have scheduled recover_work here.
>
right, v2 will change that to only skip rescheduling the timer in the
recover path.
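
To make the flow concrete, something along these lines (illustrative only,
not the actual v2; the error prints are elided and the "recovering" local
is just a placeholder name):

static void hangcheck_handler(struct timer_list *t)
{
        struct msm_gpu *gpu = from_timer(gpu, t, hangcheck_timer);
        struct msm_ringbuffer *ring = gpu->funcs->active_ring(gpu);
        uint32_t fence = ring->memptrs->fence;
        bool recovering = false;

        if (fence != ring->hangcheck_fence) {
                /* some progress has been made.. ya! */
                ring->hangcheck_fence = fence;
        } else if (fence_before(fence, ring->fctx->last_fence)) {
                /* no progress and not done.. hung! */
                ring->hangcheck_fence = fence;
                /* (DRM_DEV_ERROR() prints elided) */
                kthread_queue_work(gpu->worker, &gpu->recover_work);
                recovering = true;
        }

        /* if still more pending work, reset the hangcheck timer --
         * except when recovery was just scheduled, in which case
         * recover_worker restarts it after replaying the remaining
         * submits:
         */
        if (!recovering &&
            fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
                hangcheck_timer_reset(gpu);

        /* workaround for missing irq: */
        msm_gpu_retire(gpu);
}

i.e. the handler keeps re-arming itself as long as there is pending work,
and only the recovery path defers re-arming to recover_worker.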
BR,
-R
> -Akhil.
> >
> >> -Akhil.
> >>> - if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
> >>> - hangcheck_timer_reset(gpu);
> >>> -
> >>> /* workaround for missing irq: */
> >>> msm_gpu_retire(gpu);
> >>> }
> >>>
>