Message-Id: <20240625-adreno_barriers-v2-1-c01f2ef4b62a@linaro.org>
Date: Tue, 25 Jun 2024 20:54:41 +0200
From: Konrad Dybcio <konrad.dybcio@...aro.org>
To: Rob Clark <robdclark@...il.com>, Sean Paul <sean@...rly.run>,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
David Airlie <airlied@...il.com>, Daniel Vetter <daniel@...ll.ch>
Cc: Marijn Suijten <marijn.suijten@...ainline.org>,
linux-arm-msm@...r.kernel.org, dri-devel@...ts.freedesktop.org,
freedreno@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Akhil P Oommen <quic_akhilpo@...cinc.com>,
Konrad Dybcio <konrad.dybcio@...aro.org>
Subject: [PATCH v2 1/2] drm/msm/adreno: De-spaghettify the use of memory
barriers
Memory barriers enforce the ordering of memory accesses, NOT the time or
order in which writes actually arrive at other observers (e.g.
memory-mapped IP). On architectures with weak memory ordering, the
latter can be a giant pain point, and it has been one in this driver.
Moreover, the gpu_/gmu_ accessors already use the non-relaxed versions
of readl/writel, which include the read/write barriers, respectively.
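For reference, a simplified model of what those accessors are assumed to
boil down to (the real wrappers also handle register-offset translation
and tracing; the _sketch suffix marks these as illustrative only):

static inline void gpu_write_sketch(struct msm_gpu *gpu, u32 reg, u32 data)
{
	/* writel(), not writel_relaxed(): carries the write barrier */
	writel(data, gpu->mmio + (reg << 2));
}

static inline u32 gpu_read_sketch(struct msm_gpu *gpu, u32 reg)
{
	/* readl(), not readl_relaxed(): carries the read barrier */
	return readl(gpu->mmio + (reg << 2));
}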
Replace the explicit barriers with a readback of the register just
written (or drop them altogether where possible); the readback
guarantees the preceding write has exited the write buffer, as the CPU
must flush the write before it can read that register back.
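As an illustration of the pattern (hypothetical helper name; assumes
gpu_write()/gpu_read() are plain writel()/readl() wrappers over the GPU
MMIO region, as in this driver):

static void a6xx_unhalt_gbif_sketch(struct msm_gpu *gpu)
{
	/* Post the write clearing the halt bits... */
	gpu_write(gpu, REG_A6XX_GBIF_HALT, 0);

	/*
	 * ...then read the same register back; the read cannot complete
	 * until the preceding write has drained from the write buffer
	 * and reached the GBIF block, so no mb()/wmb() is needed.
	 */
	gpu_read(gpu, REG_A6XX_GBIF_HALT);
}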
Signed-off-by: Konrad Dybcio <konrad.dybcio@...aro.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 4 +---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 10 ++++++----
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 0e3dfd4c2bc8..09d640165b18 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -466,9 +466,7 @@ static int a6xx_rpmh_start(struct a6xx_gmu *gmu)
int ret;
u32 val;
- gmu_write(gmu, REG_A6XX_GMU_RSCC_CONTROL_REQ, 1 << 1);
- /* Wait for the register to finish posting */
- wmb();
+ gmu_write(gmu, REG_A6XX_GMU_RSCC_CONTROL_REQ, BIT(1));
ret = gmu_poll_timeout(gmu, REG_A6XX_GMU_RSCC_CONTROL_ACK, val,
val & (1 << 1), 100, 10000);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index c98cdb1e9326..4083d0cad782 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -855,14 +855,16 @@ static int hw_init(struct msm_gpu *gpu)
/* Clear GBIF halt in case GX domain was not collapsed */
if (adreno_is_a619_holi(adreno_gpu)) {
gpu_write(gpu, REG_A6XX_GBIF_HALT, 0);
+ gpu_read(gpu, REG_A6XX_GBIF_HALT);
+
gpu_write(gpu, REG_A6XX_RBBM_GPR0_CNTL, 0);
- /* Let's make extra sure that the GPU can access the memory.. */
- mb();
+ gpu_read(gpu, REG_A6XX_RBBM_GPR0_CNTL);
} else if (a6xx_has_gbif(adreno_gpu)) {
gpu_write(gpu, REG_A6XX_GBIF_HALT, 0);
+ gpu_read(gpu, REG_A6XX_GBIF_HALT);
+
gpu_write(gpu, REG_A6XX_RBBM_GBIF_HALT, 0);
- /* Let's make extra sure that the GPU can access the memory.. */
- mb();
+ gpu_read(gpu, REG_A6XX_RBBM_GBIF_HALT);
}
/* Some GPUs are stubborn and take their sweet time to unhalt GBIF! */
--
2.45.2