Message-Id: <20240121095604.2368-1-hdanton@sina.com>
Date: Sun, 21 Jan 2024 17:56:04 +0800
From: Hillf Danton <hdanton@...a.com>
To: Erico Nunes <nunes.erico@...il.com>
Cc: Qiang Yu <yuq825@...il.com>,
dri-devel@...ts.freedesktop.org,
lima@...ts.freedesktop.org,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 4/6] drm/lima: handle spurious timeouts due to high irq latency
On Wed, 17 Jan 2024 04:12:10 +0100 Erico Nunes <nunes.erico@...il.com>
>
> @@ -401,9 +399,33 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
> struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
> struct lima_sched_task *task = to_lima_task(job);
> struct lima_device *ldev = pipe->ldev;
> + struct lima_ip *ip = pipe->processor[0];
> +
> + /*
> + * If the GPU managed to complete this job's fence, the timeout is
> + * spurious. Bail out.
> + */
> + if (dma_fence_is_signaled(task->done_fence)) {
> + DRM_WARN("%s spurious timeout\n", lima_ip_name(ip));
> + return DRM_GPU_SCHED_STAT_NOMINAL;
> + }
Given the 500ms timeout in lima_sched_pipe_init(), no timeout is spurious by
definition, so stop selling a bandaid like this when you have options such as
tracking down the actual reasons behind the timeout.
> +
> + /*
> + * Lima IRQ handler may take a long time to process an interrupt
> + * if there is another IRQ handler hogging the processing.
> + * In order to catch such cases and not report spurious Lima job
> + * timeouts, synchronize the IRQ handler and re-check the fence
> + * status.
> + */
> + synchronize_irq(ip->irq);
> +
> + if (dma_fence_is_signaled(task->done_fence)) {
> + DRM_WARN("%s unexpectedly high interrupt latency\n", lima_ip_name(ip));
> + return DRM_GPU_SCHED_STAT_NOMINAL;
> + }
>
> if (!pipe->error)
> - DRM_ERROR("lima job timeout\n");
> + DRM_ERROR("%s lima job timeout\n", lima_ip_name(ip));
>
> drm_sched_stop(&pipe->base, &task->base);
>
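For readers skimming the diff, here is a consolidated C sketch of the check
the hunk above adds to the timedout_job callback. It is illustrative only,
not the verbatim patch: example_timedout_job() is a hypothetical name, the
include list is an assumption about which driver headers provide the lima
types, and the final error path stands in for the existing recovery code
(drm_sched_stop() and the reset that follows it in the quoted function).

#include <linux/interrupt.h>
#include <linux/dma-fence.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>

#include "lima_device.h"	/* assumed: struct lima_ip, lima_ip_name() */
#include "lima_sched.h"		/* assumed: lima_sched_pipe/task, to_lima_*() */

static enum drm_gpu_sched_stat example_timedout_job(struct drm_sched_job *job)
{
	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
	struct lima_sched_task *task = to_lima_task(job);
	struct lima_ip *ip = pipe->processor[0];

	/* Job already finished: the timeout merely raced with completion. */
	if (dma_fence_is_signaled(task->done_fence)) {
		DRM_WARN("%s spurious timeout\n", lima_ip_name(ip));
		return DRM_GPU_SCHED_STAT_NOMINAL;
	}

	/*
	 * The completion interrupt may still be stuck behind another
	 * handler hogging the CPU; wait for any handler currently
	 * running on this IRQ line, then look at the fence again.
	 */
	synchronize_irq(ip->irq);

	if (dma_fence_is_signaled(task->done_fence)) {
		DRM_WARN("%s unexpectedly high interrupt latency\n",
			 lima_ip_name(ip));
		return DRM_GPU_SCHED_STAT_NOMINAL;
	}

	/*
	 * Genuine timeout: in the real driver this is where the existing
	 * recovery path takes over (drm_sched_stop(), GPU reset, etc.).
	 */
	DRM_ERROR("%s lima job timeout\n", lima_ip_name(ip));
	return DRM_GPU_SCHED_STAT_NOMINAL;
}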