Date:   Thu, 20 Jun 2019 22:14:44 -0400
From:   Brian Masney <masneyb@...tation.org>
To:     Rob Clark <robdclark@...il.com>
Cc:     Rob Herring <robh+dt@...nel.org>, Andy Gross <agross@...nel.org>,
        David Brown <david.brown@...aro.org>,
        Sean Paul <sean@...rly.run>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        David Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>,
        Mark Rutland <mark.rutland@....com>,
        Jonathan Marek <jonathan@...ek.ca>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        freedreno <freedreno@...ts.freedesktop.org>,
        devicetree@...r.kernel.org
Subject: Re: [PATCH 2/6] dt-bindings: display: msm: gmu: add optional ocmem
 property

On Wed, Jun 19, 2019 at 01:21:20PM -0700, Rob Clark wrote:
> On Wed, Jun 19, 2019 at 1:17 PM Rob Herring <robh+dt@...nel.org> wrote:
> >
> > On Sun, Jun 16, 2019 at 7:29 AM Brian Masney <masneyb@...tation.org> wrote:
> > >
> > > Some A3xx and A4xx Adreno GPUs do not have GMEM inside the GPU core and
> > > must use the On Chip MEMory (OCMEM) in order to be functional. Add the
> > > optional ocmem property to the Adreno Graphics Management Unit bindings.
> > >
> > > Signed-off-by: Brian Masney <masneyb@...tation.org>
> > > ---
> > >  Documentation/devicetree/bindings/display/msm/gmu.txt | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/display/msm/gmu.txt b/Documentation/devicetree/bindings/display/msm/gmu.txt
> > > index 90af5b0a56a9..c746b95e95d4 100644
> > > --- a/Documentation/devicetree/bindings/display/msm/gmu.txt
> > > +++ b/Documentation/devicetree/bindings/display/msm/gmu.txt
> > > @@ -31,6 +31,10 @@ Required properties:
> > >  - iommus: phandle to the adreno iommu
> > >  - operating-points-v2: phandle to the OPP operating points
> > >
> > > +Optional properties:
> > > +- ocmem: phandle to the On Chip Memory (OCMEM) that's present on some Snapdragon
> > > +         SoCs. See Documentation/devicetree/bindings/soc/qcom/qcom,ocmem.yaml.
> >
> > We already have a couple of similar properties. Let's standardize on
> > 'sram', as that is what TI already uses.
> >
> > Also, is the whole OCMEM allocated to the GMU? If not you should have
> > child nodes to subdivide the memory.
> >
> 
> iirc, downstream a large chunk of OCMEM is statically allocated for
> the GPU; the remainder is dynamically allocated for different use-cases.
> The upstream driver Brian is proposing only handles the static
> allocation case.

It appears that the GPU expects to use a fixed region of ocmem starting
at address 0. The freedreno driver allocates 1MB of ocmem on the Nexus 5,
starting at ocmem address 0. As a test, I changed the starting address to
0.5MB, and kmscube showed only half the cube, plus four wide black bars
across the screen:

https://www.flickr.com/photos/masneyb/48100534381/

> (and I don't think we have upstream support for the various audio and
> video use-cases that used dynamic OCMEM allocation downstream)

That's my understanding as well.

> Although maybe we should still have a child node to separate the
> statically and dynamically allocated parts?  I'm not sure what would
> make the most sense.

Given that the GPU expects a fixed address in ocmem, perhaps it makes
sense to have the child node. How about this, based on the
sram/sram.txt bindings?

  ocmem: ocmem@fdd00000 {
    compatible = "qcom,msm8974-ocmem";

    reg = <0xfdd00000 0x2000>, <0xfec00000 0x180000>;
    reg-names = "ctrl", "mem";

    clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>, <&mmcc OCMEMCX_OCMEMNOC_CLK>;
    clock-names = "core", "iface";

    gmu-sram@0 {
      /* static 1MB region for the GPU, fixed at ocmem address 0 */
      reg = <0x0 0x100000>;
      pool;
    };

    misc-sram@100000 {
      /* remaining 0.5MB, exported for use outside the kernel */
      reg = <0x100000 0x080000>;
      export;
    };
  };

I marked the misc pool with 'export' since the downstream ocmem sources
contain a reference to their closed-source libsensors, which runs in
userspace.

Looking at the sram bindings led me to the genalloc API
(Documentation/core-api/genalloc.rst). I wonder if that's how this
should be done?
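
For example, here's a rough, untested sketch of how the GPU driver might
claim its static region through genalloc if the gmu-sram@0 node above
were handled by the generic sram driver. The "sram" phandle property and
the function name are just placeholders, not anything that exists today:

  #include <linux/device.h>
  #include <linux/genalloc.h>
  #include <linux/of.h>

  /*
   * Untested sketch: grab the GPU's static OCMEM region from the
   * genalloc pool that the sram driver would create for gmu-sram@0.
   * Assumes a hypothetical "sram" phandle in the gmu node pointing
   * at that child node.
   */
  static int gmu_claim_ocmem(struct device *dev, size_t size,
                             phys_addr_t *ocmem_base)
  {
          struct gen_pool *pool;
          unsigned long vaddr;

          pool = of_gen_pool_get(dev->of_node, "sram", 0);
          if (!pool)
                  return -EPROBE_DEFER;

          vaddr = gen_pool_alloc(pool, size);
          if (!vaddr)
                  return -ENOMEM;

          /* the GPU needs the physical OCMEM address (0 in our case) */
          *ocmem_base = gen_pool_virt_to_phys(pool, vaddr);

          return 0;
  }

I believe the default first-fit allocator would hand out the start of
the pool, which is what the GPU wants here, though relying on that
feels fragile.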

Brian
