Date:   Tue, 20 Dec 2022 21:32:30 +0530
From:   Sumit Gupta <sumitg@...dia.com>
To:     <treding@...dia.com>, <krzysztof.kozlowski@...aro.org>,
        <dmitry.osipenko@...labora.com>, <viresh.kumar@...aro.org>,
        <rafael@...nel.org>, <jonathanh@...dia.com>, <robh+dt@...nel.org>,
        <linux-kernel@...r.kernel.org>, <linux-tegra@...r.kernel.org>,
        <linux-pm@...r.kernel.org>, <devicetree@...r.kernel.org>
CC:     <sanjayc@...dia.com>, <ksitaraman@...dia.com>, <ishah@...dia.com>,
        <bbasu@...dia.com>, <sumitg@...dia.com>
Subject: [Patch v1 00/10] Tegra234 Memory interconnect support

This patch series adds memory interconnect support for the Tegra234 SoC.
It is used to dynamically scale the DRAM frequency according to the
bandwidth requests from the different Memory Controller (MC) clients.
MC clients use the ICC framework's icc_set_bw() API to request DRAM
bandwidth (BW). Along the interconnect path, a request is routed from
the MC to the EMC driver. The EMC driver then sends the client ID, type,
and frequency request info to the BPMP-FW, which sets the final DRAM
frequency after considering all existing requests.

MC and EMC are the ICC providers. Nodes in path for a request will be:
     Client[1-n] -> MC -> EMC -> EMEM/DRAM
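
For reference, below is a minimal sketch of how an MC client driver is
expected to request DRAM bandwidth through the generic ICC framework.
The function name, interconnect name and bandwidth values are only
illustrative and are not taken from this series:

  #include <linux/err.h>
  #include <linux/device.h>
  #include <linux/interconnect.h>

  /* Illustrative only: get the client's ICC path from DT, request BW */
  static int example_client_request_bw(struct device *dev)
  {
          struct icc_path *path;

          path = devm_of_icc_get(dev, "dma-mem");
          if (IS_ERR(path))
                  return PTR_ERR(path);

          /* avg/peak BW; routed Client -> MC -> EMC -> EMEM/DRAM */
          return icc_set_bw(path, MBps_to_icc(1000), MBps_to_icc(2000));
  }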

The patch series also adds interconnect support to the CPUFREQ driver
so that memory bandwidth scales with CPU frequency. For that, a
per-cluster OPP table is added to the CPUFREQ driver and is used to
scale the DRAM frequency by requesting the minimum BW corresponding to
the given CPU frequency in that cluster's OPP table.
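
A rough sketch of the CPUFREQ side is shown below, assuming the
per-cluster OPP table carries opp-peak-kBps entries and the standard
OPP/interconnect helpers are used. The function name and flow are
illustrative, not the exact driver code from this series:

  #include <linux/err.h>
  #include <linux/pm_opp.h>

  /*
   * Illustrative sketch: on a CPU frequency change, look up the
   * matching OPP (which carries an opp-peak-kBps value in DT) and let
   * the OPP core forward the BW request over the interconnect path.
   */
  static int example_set_cpu_bw(struct device *cpu_dev, unsigned long freq_hz)
  {
          struct dev_pm_opp *opp;
          int ret;

          opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
          if (IS_ERR(opp))
                  return PTR_ERR(opp);

          /* applies the OPP's bandwidth (and other) requirements */
          ret = dev_pm_opp_set_opp(cpu_dev, opp);
          dev_pm_opp_put(opp);

          return ret;
  }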

Sumit Gupta (10):
  memory: tegra: add interconnect support for DRAM scaling in Tegra234
  memory: tegra: adding iso mc clients for Tegra234
  memory: tegra: add pcie mc clients for Tegra234
  memory: tegra: add support for software mc clients in Tegra234
  dt-bindings: tegra: add icc ids for dummy MC clients
  arm64: tegra: Add cpu OPP tables and interconnects property
  cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist
  cpufreq: tegra194: add OPP support and set bandwidth
  memory: tegra: get number of enabled mc channels
  memory: tegra: make cluster bw request a multiple of mc_channels

 arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++
 drivers/cpufreq/cpufreq-dt-platdev.c     |   1 +
 drivers/cpufreq/tegra194-cpufreq.c       | 152 +++++-
 drivers/memory/tegra/mc.c                |  80 +++-
 drivers/memory/tegra/mc.h                |   1 +
 drivers/memory/tegra/tegra186-emc.c      | 166 +++++++
 drivers/memory/tegra/tegra234.c          | 565 ++++++++++++++++++++++-
 include/dt-bindings/memory/tegra234-mc.h |   5 +
 include/soc/tegra/mc.h                   |  11 +
 include/soc/tegra/tegra-icc.h            |  79 ++++
 10 files changed, 1312 insertions(+), 24 deletions(-)
 create mode 100644 include/soc/tegra/tegra-icc.h

-- 
2.17.1
