Message-Id: <20190730165618.10122-11-digetx@gmail.com>
Date: Tue, 30 Jul 2019 19:56:13 +0300
From: Dmitry Osipenko <digetx@...il.com>
To: Rob Herring <robh+dt@...nel.org>,
Michael Turquette <mturquette@...libre.com>,
Joseph Lo <josephl@...dia.com>,
Thierry Reding <thierry.reding@...il.com>,
Jonathan Hunter <jonathanh@...dia.com>,
Peter De Schrijver <pdeschrijver@...dia.com>,
Prashant Gaikwad <pgaikwad@...dia.com>,
Stephen Boyd <sboyd@...nel.org>
Cc: devicetree@...r.kernel.org, linux-clk@...r.kernel.org,
linux-tegra@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v9 10/15] dt-bindings: memory: Add binding for NVIDIA Tegra30 Memory Controller

Add binding for the NVIDIA Tegra30 SoC Memory Controller.

Signed-off-by: Dmitry Osipenko <digetx@...il.com>
---
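A note for reviewers (not part of the commit): the schema below requires
"#iommu-cells" and "#reset-cells", but the bundled example only shows the
provider node. For illustration only, a client device would reference the
Memory Controller with single-cell specifiers roughly as sketched below.
The "mc" label, the dc@54200000 node and the TEGRA_SWGROUP_DC /
TEGRA30_MC_RESET_DC macros (assumed to come from
include/dt-bindings/memory/tegra30-mc.h) are illustrative and are not
defined by this patch:

    #include <dt-bindings/memory/tegra30-mc.h>

    mc: memory-controller@7000f000 {
        compatible = "nvidia,tegra30-mc";
        reg = <0x7000f000 0x400>;
        clocks = <&tegra_car 32>;
        clock-names = "mc";
        interrupts = <0 77 4>;

        #iommu-cells = <1>;
        #reset-cells = <1>;
    };

    dc@54200000 {
        /* unrelated client properties omitted */

        /* one cell: the client's SWGROUP (memory client group) ID */
        iommus = <&mc TEGRA_SWGROUP_DC>;

        /* one cell: the client's MC hot-reset ID */
        resets = <&mc TEGRA30_MC_RESET_DC>;
    };

Both specifiers are a single cell selecting the client, which is why the
schema fixes "#iommu-cells" and "#reset-cells" to 1.
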
.../memory-controllers/nvidia,tegra30-mc.yaml | 173 ++++++++++++++++++
1 file changed, 173 insertions(+)
create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-mc.yaml
diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-mc.yaml b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-mc.yaml
new file mode 100644
index 000000000000..40e63cdf836b
--- /dev/null
+++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-mc.yaml
@@ -0,0 +1,173 @@
+# SPDX-License-Identifier: (GPL-2.0)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra30-mc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NVIDIA Tegra30 SoC Memory Controller
+
+maintainers:
+ - Dmitry Osipenko <digetx@...il.com>
+ - Jon Hunter <jonathanh@...dia.com>
+ - Thierry Reding <thierry.reding@...il.com>
+
+description: |
+ The Tegra30 Memory Controller architecturally consists of the following parts:
+
+ Arbitration Domains, which can handle a single request or response per
+ clock from a group of clients. Typically, a system has a single Arbitration
+ Domain, but an implementation may divide the client space into multiple
+ Arbitration Domains to increase the effective system bandwidth.
+
+ Protocol Arbiter, which manages a related pool of memory devices. A system
+ may have a single Protocol Arbiter or multiple Protocol Arbiters.
+
+ Memory Crossbar, which routes requests and responses between Arbitration
+ Domains and Protocol Arbiters. In the simplest version of the system, the
+ Memory Crossbar is just a pass-through between a single Arbitration Domain
+ and a single Protocol Arbiter.
+
+ Global Resources, which include things like configuration registers that
+ are shared across the Memory Subsystem.
+
+ The Tegra30 Memory Controller handles memory requests from internal clients
+ and arbitrates among them to allocate memory bandwidth for DDR3L and LPDDR2
+ SDRAMs.
+
+properties:
+ compatible:
+ const: nvidia,tegra30-mc
+
+ reg:
+ maxItems: 1
+ description:
+ Physical base address and size of the register region.
+
+ clocks:
+ maxItems: 1
+ description:
+ Memory Controller clock.
+
+ clock-names:
+ items:
+ - const: mc
+
+ interrupts:
+ maxItems: 1
+ description:
+ Memory Controller interrupt.
+
+ "#reset-cells":
+ const: 1
+
+ "#iommu-cells":
+ const: 1
+
+patternProperties:
+ "^emc-timings-[0-9]+$":
+ type: object
+ properties:
+ nvidia,ram-code:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ Value of RAM_CODE this timing set is used for.
+
+ patternProperties:
+ "^timing-[0-9]+$":
+ type: object
+ properties:
+ clock-frequency:
+ description:
+ Memory clock rate in Hz.
+ minimum: 1000000
+ maximum: 900000000
+
+ nvidia,emem-configuration:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description: |
+ Values to be written to the EMEM register block. See section
+ "18.13.1 MC Registers" in the TRM.
+ items:
+ - description: MC_EMEM_ARB_CFG
+ - description: MC_EMEM_ARB_OUTSTANDING_REQ
+ - description: MC_EMEM_ARB_TIMING_RCD
+ - description: MC_EMEM_ARB_TIMING_RP
+ - description: MC_EMEM_ARB_TIMING_RC
+ - description: MC_EMEM_ARB_TIMING_RAS
+ - description: MC_EMEM_ARB_TIMING_FAW
+ - description: MC_EMEM_ARB_TIMING_RRD
+ - description: MC_EMEM_ARB_TIMING_RAP2PRE
+ - description: MC_EMEM_ARB_TIMING_WAP2PRE
+ - description: MC_EMEM_ARB_TIMING_R2R
+ - description: MC_EMEM_ARB_TIMING_W2W
+ - description: MC_EMEM_ARB_TIMING_R2W
+ - description: MC_EMEM_ARB_TIMING_W2R
+ - description: MC_EMEM_ARB_DA_TURNS
+ - description: MC_EMEM_ARB_DA_COVERS
+ - description: MC_EMEM_ARB_MISC0
+ - description: MC_EMEM_ARB_RING1_THROTTLE
+
+ required:
+ - clock-frequency
+ - nvidia,emem-configuration
+
+ additionalProperties: false
+
+ required:
+ - nvidia,ram-code
+
+ additionalProperties: false
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - clocks
+ - clock-names
+ - "#reset-cells"
+ - "#iommu-cells"
+
+additionalProperties: false
+
+examples:
+ - |
+ memory-controller@7000f000 {
+ compatible = "nvidia,tegra30-mc";
+ reg = <0x7000f000 0x400>;
+ clocks = <&tegra_car 32>;
+ clock-names = "mc";
+
+ interrupts = <0 77 4>;
+
+ #iommu-cells = <1>;
+ #reset-cells = <1>;
+
+ emc-timings-1 {
+ nvidia,ram-code = <1>;
+
+ timing-667000000 {
+ clock-frequency = <667000000>;
+
+ nvidia,emem-configuration = <
+ 0x0000000a /* MC_EMEM_ARB_CFG */
+ 0xc0000079 /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000003 /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000004 /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000010 /* MC_EMEM_ARB_TIMING_RC */
+ 0x0000000b /* MC_EMEM_ARB_TIMING_RAS */
+ 0x0000000a /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001 /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000003 /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x0000000b /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000002 /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000002 /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000004 /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000008 /* MC_EMEM_ARB_TIMING_W2R */
+ 0x08040202 /* MC_EMEM_ARB_DA_TURNS */
+ 0x00130b10 /* MC_EMEM_ARB_DA_COVERS */
+ 0x70ea1f11 /* MC_EMEM_ARB_MISC0 */
+ 0x001f0000 /* MC_EMEM_ARB_RING1_THROTTLE */
+ >;
+ };
+ };
+ };
--
2.22.0