Message-ID: <CAF6AEGsJpLwyXK7_TH0jZx64A1rOX9F23dL5TZUJUBV=tsKLCA@mail.gmail.com>
Date: Tue, 24 Sep 2024 09:11:44 -0700
From: Rob Clark <robdclark@...il.com>
To: Connor Abbott <cwabbott0@...il.com>
Cc: dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
freedreno@...ts.freedesktop.org, Akhil P Oommen <quic_akhilpo@...cinc.com>,
Rob Clark <robdclark@...omium.org>, Sean Paul <sean@...rly.run>,
Konrad Dybcio <konradybcio@...nel.org>, Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
Marijn Suijten <marijn.suijten@...ainline.org>, David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>, open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] drm/msm/a6xx+: Insert a fence wait before SMMU table update
On Wed, Sep 18, 2024 at 9:51 AM Connor Abbott <cwabbott0@...il.com> wrote:
>
> On Fri, Sep 13, 2024 at 8:51 PM Rob Clark <robdclark@...il.com> wrote:
> >
> > From: Rob Clark <robdclark@...omium.org>
> >
> > The CP_SMMU_TABLE_UPDATE _should_ be waiting for idle, but on some
> > devices (x1-85, possibly others), it seems to pass that barrier while
> > there are still things in the event completion FIFO waiting to be
> > written back to memory.
> >
> > Work around that by adding a fence wait before context switch. The
> > CP_EVENT_WRITE that writes the fence is the last write from a submit,
> > so seeing this value hit memory is a reliable indication that it is
> > safe to proceed with the context switch.
> >
> > Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/63
> > Signed-off-by: Rob Clark <robdclark@...omium.org>
> > ---
> > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 14 +++++++++++---
> > 1 file changed, 11 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > index bcaec86ac67a..ba5b35502e6d 100644
> > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > @@ -101,9 +101,10 @@ static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> > }
> >
> > static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> > - struct msm_ringbuffer *ring, struct msm_file_private *ctx)
> > + struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
> > {
> > bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
> > + struct msm_file_private *ctx = submit->queue->ctx;
> > struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> > phys_addr_t ttbr;
> > u32 asid;
> > @@ -115,6 +116,13 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> > if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
> > return;
> >
> > + /* Wait for previous submit to complete before continuing: */
> > + OUT_PKT7(ring, CP_WAIT_TIMESTAMP, 4);
>
> CP_WAIT_TIMESTAMP doesn't exist on a6xx, so this won't work there. I
> don't know if the bug exists on a6xx, but I'd be inclined to say it
> has always existed and we just never hit it because it requires some
> very specific timing conditions. We can make it work on a6xx by using
> CP_WAIT_REG_MEM and waiting for it to equal the exact value.
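[Editorial sketch: Connor's suggestion — polling the fence address with CP_WAIT_REG_MEM until it equals the previous submit's seqno — could look roughly like the following stand-alone sketch. The OUT_PKT7/OUT_RING stubs just record dwords into an array for inspection; the opcode value, field layouts, and flag names are illustrative assumptions, not the real adreno_pm4.xml.h encodings.]

```c
#include <stdint.h>
#include <stddef.h>

#define CP_WAIT_REG_MEM  0x3c        /* assumed opcode value for this sketch */
#define WAIT_FUNC_EQ     0x3         /* assumed "wait until equal" function */
#define WAIT_POLL_MEMORY (1u << 4)   /* assumed "poll memory, not register" flag */

static uint32_t ring[64];
static size_t ring_len;

/* Simplified stand-ins for the msm_ringbuffer emit helpers */
static void OUT_RING(uint32_t dword) { ring[ring_len++] = dword; }
static void OUT_PKT7(uint32_t opcode, uint32_t cnt)
{
	/* real PKT7 headers also carry parity bits; omitted here */
	OUT_RING((0x7u << 28) | (opcode << 16) | cnt);
}

/* Stall the CP until *fence_iova == seqno, before CP_SMMU_TABLE_UPDATE */
static void emit_fence_wait_a6xx(uint64_t fence_iova, uint32_t seqno)
{
	OUT_PKT7(CP_WAIT_REG_MEM, 6);
	OUT_RING(WAIT_FUNC_EQ | WAIT_POLL_MEMORY);
	OUT_RING((uint32_t)fence_iova);          /* poll addr lo */
	OUT_RING((uint32_t)(fence_iova >> 32));  /* poll addr hi */
	OUT_RING(seqno);                         /* reference value */
	OUT_RING(0xffffffff);                    /* compare mask */
	OUT_RING(16);                            /* delay between polls */
}
```

Unlike CP_WAIT_TIMESTAMP, this compares for exact equality, hence Connor's note about waiting for "the exact value" rather than a greater-or-equal timestamp.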
I've been unable to reproduce this on a690, despite running at a
similar fps (so similar rate of CP_SMMU_TABLE_UPDATEs). I guess I
can't rule out that this is just _harder_ to hit on a6xx due to the
shallower pipeline. It would be nice to get some data points on other
a7xx, but I only have the one.
I did attempt to come up with a stand-alone igt reproducer by just
ping-ponging between contexts (with fences to force context switches)
with thousands of CP_EVENT_WRITEs, to no avail. I guess I'd need to
actually set up a blit or draw to make the event-write asynchronous,
but that would be a lot harder to do in igt.
I guess for now I'll re-work this patch to only do the workaround on
a7xx. And wire up the gallium preemption support so we can confirm
whether this is also an issue for preemption.
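[Editorial sketch: the reworked patch would gate the fence wait to a7xx, since a6xx lacks CP_WAIT_TIMESTAMP. The stand-alone sketch below mirrors the CP_WAIT_TIMESTAMP sequence from the patch; the opcode value and the OUT_* stubs are simplified assumptions rather than the real driver definitions.]

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define CP_WAIT_TIMESTAMP 0x14  /* assumed opcode value for this sketch */

static uint32_t ring[64];
static size_t ring_len;

/* Simplified stand-ins for the msm_ringbuffer emit helpers */
static void OUT_RING(uint32_t dword) { ring[ring_len++] = dword; }
static void OUT_PKT7(uint32_t opcode, uint32_t cnt)
{
	OUT_RING((0x7u << 28) | (opcode << 16) | cnt);
}

/*
 * Wait for the previous submit's fence (submit_seqno - 1) to land in
 * memory before the SMMU table update, but only on a7xx.
 */
static void emit_pagetable_fence_wait(bool is_a7xx, uint64_t fence_iova,
				      uint32_t submit_seqno)
{
	if (!is_a7xx)
		return;  /* a6xx has no CP_WAIT_TIMESTAMP packet */

	OUT_PKT7(CP_WAIT_TIMESTAMP, 4);
	OUT_RING(0);
	OUT_RING((uint32_t)fence_iova);          /* fence addr lo */
	OUT_RING((uint32_t)(fence_iova >> 32));  /* fence addr hi */
	OUT_RING(submit_seqno - 1);              /* previous submit's fence */
}
```

The seqno - 1 value works because the CP_EVENT_WRITE that writes the fence is the last write of each submit, so seeing it in memory proves the prior submit fully retired.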
BR,
-R
> Connor
>
> > + OUT_RING(ring, 0);
> > + OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence)));
> > + OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence)));
> > + OUT_RING(ring, submit->seqno - 1);
> > +
> > if (!sysprof) {
> > if (!adreno_is_a7xx(adreno_gpu)) {
> > /* Turn off protected mode to write to special registers */
> > @@ -193,7 +201,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > struct msm_ringbuffer *ring = submit->ring;
> > unsigned int i, ibs = 0;
> >
> > - a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
> > + a6xx_set_pagetable(a6xx_gpu, ring, submit);
> >
> > get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0),
> > rbmemptr_stats(ring, index, cpcycles_start));
> > @@ -283,7 +291,7 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
> >
> > - a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
> > + a6xx_set_pagetable(a6xx_gpu, ring, submit);
> >
> > get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
> > rbmemptr_stats(ring, index, cpcycles_start));
> > --
> > 2.46.0
> >