Message-ID: <Zn0bEMMam4_VPFoc@phenom.ffwll.local>
Date: Thu, 27 Jun 2024 09:56:00 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Akhil P Oommen <quic_akhilpo@...cinc.com>
Cc: Daniel Vetter <daniel@...ll.ch>,
Konrad Dybcio <konrad.dybcio@...aro.org>,
Rob Clark <robdclark@...il.com>, Sean Paul <sean@...rly.run>,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
David Airlie <airlied@...il.com>,
Marijn Suijten <marijn.suijten@...ainline.org>,
linux-arm-msm@...r.kernel.org, dri-devel@...ts.freedesktop.org,
freedreno@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] drm/msm/adreno: De-spaghettify the use of memory
barriers
On Thu, Jun 27, 2024 at 02:54:57AM +0530, Akhil P Oommen wrote:
> On Wed, Jun 26, 2024 at 09:59:39AM +0200, Daniel Vetter wrote:
> > On Tue, Jun 25, 2024 at 08:54:41PM +0200, Konrad Dybcio wrote:
> > > Memory barriers help ensure instruction ordering, NOT time and order
> > > of actual write arrival at other observers (e.g. memory-mapped IP).
> > > On architectures employing weak memory ordering, the latter can be a
> > > giant pain point, and it has been one in this driver.
> > >
> > > Moreover, the gpu_/gmu_ accessors already use the non-relaxed versions
> > > of readl/writel, which include read and write barriers, respectively.
> > >
> > > Replace the barriers with a readback (or drop altogether where possible)
> > > that ensures the previous writes have exited the write buffer (as the CPU
> > > must flush the write to the register it's trying to read back).
> > >
> > > Signed-off-by: Konrad Dybcio <konrad.dybcio@...aro.org>
> >
> > In PCI, some of these readbacks are actually part of the spec and are
> > called posting reads. I'd very much recommend drivers create a small
> > wrapper function with a void return value for these cases, because it
> > makes the code so much more legible and easier to understand.
>
> For Adreno, which is configured via MMIO, we don't need to do this often.
> GBIF_HALT is a scenario where we need to be extra careful, as it can
> potentially cause an internal lockup. Another scenario I can think of is
> GPU soft reset, where we need to keep a delay on the CPU side after
> triggering it. We should closely scrutinize any other instance that comes
> up. So I feel a good justification as a comment here would be enough to
> remind the reader. Think of it as a way to discourage its use by making it
> hard.
>
> This is a bit subjective, I am fine if you have a strong opinion on this.
Eh, it's up to you, but "we don't do this often" is a reason to make them
stand out even more. It's for similar reasons that CPU memory barriers must
all have a comment explaining what they're synchronizing against.

It's also up to you whether you want just a comment rule, or whether you
make them stand out even more with an explicit name (while still keeping
the comment rule) that's distinct from normal reads. Again comparing to CPU
barriers, the nice thing is that they're (in most cases at least, unless
you do really scary stuff) very easy to spot in the code and ring alarm
bells when doing reviews.
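
Just to sketch what I mean (entirely untested; gpu_write_flush() is a name
I'm making up here, built on the existing gpu_write()/gpu_read() accessors,
and it could live next to them in msm_gpu.h):

/*
 * Hypothetical helper: write a register and immediately read it back so
 * the write is flushed out of the CPU's write buffer before we continue.
 * The void return makes it obvious at the call site that this is a
 * posting read, not a value anyone cares about.
 */
static inline void gpu_write_flush(struct msm_gpu *gpu, u32 reg, u32 data)
{
	gpu_write(gpu, reg, data);
	gpu_read(gpu, reg);	/* posting read, value intentionally discarded */
}

Then something like the GBIF_HALT sequence reads as a single obvious
"write and flush" step, and reviewers know to look for the comment
justifying it right next to the call.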
-Sima
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch