Message-ID: <ZnvKa29EceUyZ62U@phenom.ffwll.local>
Date: Wed, 26 Jun 2024 09:59:39 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Konrad Dybcio <konrad.dybcio@...aro.org>
Cc: Rob Clark <robdclark@...il.com>, Sean Paul <sean@...rly.run>,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
David Airlie <airlied@...il.com>, Daniel Vetter <daniel@...ll.ch>,
Marijn Suijten <marijn.suijten@...ainline.org>,
linux-arm-msm@...r.kernel.org, dri-devel@...ts.freedesktop.org,
freedreno@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Akhil P Oommen <quic_akhilpo@...cinc.com>
Subject: Re: [PATCH v2 1/2] drm/msm/adreno: De-spaghettify the use of memory
barriers
On Tue, Jun 25, 2024 at 08:54:41PM +0200, Konrad Dybcio wrote:
> Memory barriers enforce the ordering of memory accesses, NOT the time
> or order in which writes actually arrive at other observers (e.g.
> memory-mapped IP). On architectures employing weak memory ordering,
> the latter can be a giant pain point, and it has been one in this
> driver.
>
> Moreover, the gpu_/gmu_ accessors already use the non-relaxed versions
> of readl/writel, which include read and write barriers, respectively.
>
> Replace the barriers with a readback (or drop them altogether where
> possible); the readback guarantees the preceding writes have left the
> write buffer, as the CPU must flush the write to the register before
> it can read that register back.
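>
> To illustrate the pattern (a sketch only, not part of the diff below;
> writel()/readl() here target the same ioremap()ed device register):
>
> 	writel(val, addr);	/* may still sit in the CPU's write buffer */
> 	readl(addr);		/* cannot return until the write has reached the device */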
>
> Signed-off-by: Konrad Dybcio <konrad.dybcio@...aro.org>
In PCI some of these readbacks are actually part of the spec and are
called posting reads. I'd very much recommend drivers create a small
wrapper function for these cases with a void return value, because it
makes the code so much more legible and easier to understand.
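
Something like this, as a minimal sketch on top of the existing
gpu_write()/gpu_read() helpers (the gpu_write_posted() name is just an
illustration, pick whatever fits the driver):

static inline void gpu_write_posted(struct msm_gpu *gpu, u32 reg, u32 data)
{
	gpu_write(gpu, reg, data);
	/* Read back to force the write to post; the value is discarded. */
	gpu_read(gpu, reg);
}

With that, the hw_init() hunks below become one self-explanatory call
per register instead of a write followed by a seemingly unused read.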
-Sima
> ---
> drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 4 +---
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 10 ++++++----
> 2 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index 0e3dfd4c2bc8..09d640165b18 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -466,9 +466,7 @@ static int a6xx_rpmh_start(struct a6xx_gmu *gmu)
> int ret;
> u32 val;
>
> - gmu_write(gmu, REG_A6XX_GMU_RSCC_CONTROL_REQ, 1 << 1);
> - /* Wait for the register to finish posting */
> - wmb();
> + gmu_write(gmu, REG_A6XX_GMU_RSCC_CONTROL_REQ, BIT(1));
>
> ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_RSCC_CONTROL_ACK, val,
> val & (1 << 1), 100, 10000);
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index c98cdb1e9326..4083d0cad782 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -855,14 +855,16 @@ static int hw_init(struct msm_gpu *gpu)
> /* Clear GBIF halt in case GX domain was not collapsed */
> if (adreno_is_a619_holi(adreno_gpu)) {
> gpu_write(gpu, REG_A6XX_GBIF_HALT, 0);
> + gpu_read(gpu, REG_A6XX_GBIF_HALT);
> +
> gpu_write(gpu, REG_A6XX_RBBM_GPR0_CNTL, 0);
> - /* Let's make extra sure that the GPU can access the memory.. */
> - mb();
> + gpu_read(gpu, REG_A6XX_RBBM_GPR0_CNTL);
> } else if (a6xx_has_gbif(adreno_gpu)) {
> gpu_write(gpu, REG_A6XX_GBIF_HALT, 0);
> + gpu_read(gpu, REG_A6XX_GBIF_HALT);
> +
> gpu_write(gpu, REG_A6XX_RBBM_GBIF_HALT, 0);
> - /* Let's make extra sure that the GPU can access the memory.. */
> - mb();
> + gpu_read(gpu, REG_A6XX_RBBM_GBIF_HALT);
> }
>
> /* Some GPUs are stubborn and take their sweet time to unhalt GBIF! */
>
> --
> 2.45.2
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch