Message-ID: <20230511173211.9127-1-sumitg@nvidia.com>
Date:   Thu, 11 May 2023 23:02:03 +0530
From:   Sumit Gupta <sumitg@...dia.com>
To:     <treding@...dia.com>, <krzysztof.kozlowski@...aro.org>,
        <dmitry.osipenko@...labora.com>, <viresh.kumar@...aro.org>,
        <rafael@...nel.org>, <jonathanh@...dia.com>, <robh+dt@...nel.org>,
        <lpieralisi@...nel.org>, <helgaas@...nel.org>
CC:     <linux-kernel@...r.kernel.org>, <linux-tegra@...r.kernel.org>,
        <linux-pm@...r.kernel.org>, <devicetree@...r.kernel.org>,
        <linux-pci@...r.kernel.org>, <mmaddireddy@...dia.com>,
        <kw@...ux.com>, <bhelgaas@...gle.com>, <vidyas@...dia.com>,
        <sanjayc@...dia.com>, <ksitaraman@...dia.com>, <ishah@...dia.com>,
        <bbasu@...dia.com>, <sumitg@...dia.com>
Subject: [Patch v8 0/8] Tegra234 Memory interconnect support

Hi All,

I have incorporated the suggestions from v7. The only change from v7 is
in 'patch 1', which now fixes a possible race when setting 'mc->bpmp',
as pointed out by Krzysztof. Requesting that the patch series be merged.

Thank you,
Sumit Gupta
============

This patch series adds memory interconnect support for the Tegra234 SoC.
It is used to dynamically scale the DRAM frequency according to the
bandwidth requests from different Memory Controller (MC) clients.
MC clients use the ICC framework's icc_set_bw() API to dynamically
request DRAM bandwidth (BW). Along the path, the request is routed from
the MC to the EMC driver. The MC driver passes the request info, such as
the client ID, type, and requested frequency, to the BPMP-FW, which sets
the final DRAM frequency considering all existing requests.

MC and EMC are the ICC providers. Nodes in path for a request will be:
     Client[1-n] -> MC -> EMC -> EMEM/DRAM
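
Below is a minimal, illustrative sketch (not code from this series) of
how an MC client driver could request DRAM bandwidth through the ICC
framework; the function name, path name and bandwidth value here are
hypothetical, the actual clients and IDs are defined in tegra234.c:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/interconnect.h>

  static int example_request_dram_bw(struct device *dev)
  {
          struct icc_path *path;
          int err;

          /* Get the client -> MC -> EMC -> EMEM path described in DT. */
          path = devm_of_icc_get(dev, "dma-mem");
          if (IS_ERR(path))
                  return PTR_ERR(path);

          /*
           * Request 1 GB/s average and peak BW. BPMP-FW aggregates the
           * requests from all clients and sets the final DRAM frequency.
           */
          err = icc_set_bw(path, MBps_to_icc(1024), MBps_to_icc(1024));
          if (err)
                  dev_err(dev, "failed to set interconnect BW: %d\n", err);

          return err;
  }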

The patch series also adds interconnect support in the clients below:
1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For that,
   add a per-cluster OPP table which the CPUFREQ driver uses to request
   the minimum BW corresponding to the given CPU frequency from the OPP
   table of its cluster (see the sketch after this list).
2) PCIe driver, to request the BW required for different modes.
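
As a rough illustration of 1) (again, not the actual tegra194-cpufreq
code; the function and parameter names are made up), the driver would
look up the OPP matching the target CPU frequency in the cluster's OPP
table and apply it, which also forwards that OPP's bandwidth values to
the interconnect path:

  #include <linux/err.h>
  #include <linux/pm_opp.h>

  static int example_set_cluster_bw(struct device *cpu_dev,
                                    unsigned long freq_hz)
  {
          struct dev_pm_opp *opp;
          int err;

          /* Find the lowest OPP at or above the requested CPU frequency. */
          opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
          if (IS_ERR(opp))
                  return PTR_ERR(opp);

          /*
           * Applying the OPP propagates its bandwidth values to the ICC
           * path, i.e. the minimum BW for this CPU frequency.
           */
          err = dev_pm_opp_set_opp(cpu_dev, opp);
          dev_pm_opp_put(opp);

          return err;
  }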

---
v7[7] -> v8:
- fixed possible race and compiler re-ordering when setting 'mc->bpmp'.

v6[6] -> v7:
- removed 'Patch 6' from v6 as it has been applied to -next by 'Viresh'.
- changed the commit message of the PCI patches as suggested by 'Lorenzo'.
- added ACKs from 'Lorenzo' on both PCI patches.

v5[5] -> v6:
- split PCI patch into two 'patch 7 & 8' as suggested by 'Lorenzo'.
- added more info into the commit description of PCI patches.

v4[4] -> v5:
- dropped 'patch 1 & 2' from v4 which added "nvidia,bpmp" in MC node.
- save BPMP ref from EMC node into 'mc->bpmp' and use it in MC driver.
- added a check for array bounds violation in PCI, as suggested by Bjorn.
- moved the DT patch having the OPP table to the last 'Patch 8'.
- made multiple small changes and cleanups suggested by Krzysztof in v4.

v3[3] -> v4:
- dropped 'patch 1' from v3 which returns 'struct tegra_bpmp *'.
- added 'patch 1 & 2' to get bpmp ref using 'nvidia,bpmp' property.
- dropped 'patch 10 & 11' from v3 and added those changes in 'patch 3'.
- added static to prototype of 'tegra_cpufreq_init_cpufreq_table()' to
  fix the warning reported by 'kernel test robot'.

v2[2] -> v3:
- in 'patch 7', set 'icc_dram_bw_scaling' to false if the set_opp call fails,
  to avoid flooding the UART with 'Failed to set bw' messages.
- added 'patch 10' to handle the case where the BPMP-FW is old and does not
  support the bwmgr MRQ.
- added 'patch 11' to fix interconnect registration in tegra186-emc.
  Reference patch link in linux-next:
  [https://lore.kernel.org/all/20230306075651.2449-21-johan+linaro@kernel.org/]

v1[1] -> v2:
- moved BW setting to tegra234_mc_icc_set() from EMC driver.
- moved sw clients to the 'tegra_mc_clients' table.
- point 'node->data' to the entry within 'tegra_mc_clients'.
- removed 'struct tegra_icc_node' and get client info using 'node->data'.
- changed error handling in and around tegra_emc_interconnect_init().
- moved 'tegra-icc.h' from 'include/soc/tegra' to 'include/linux'.
- added interconnect support to PCIE driver in 'Patch 9'.
- merged 'Patch 9 & 10' from [1] to get num_channels and use it.
- merged 'Patch 2 & 3' from [1] to add the ISO and NISO clients.
- added the 'Acked-by' of Krzysztof from 'Patch 05/10' of [1].
- removed 'Patch 7' from [1] as it is merged now.

Sumit Gupta (8):
  memory: tegra: add interconnect support for DRAM scaling in Tegra234
  memory: tegra: add mc clients for Tegra234
  memory: tegra: add software mc clients in Tegra234
  dt-bindings: tegra: add icc ids for dummy MC clients
  memory: tegra: make cpu cluster bw request a multiple of mc channels
  PCI: tegra194: Fix possible array out of bounds access
  PCI: tegra194: Add interconnect support in Tegra234
  arm64: tegra: Add cpu OPP tables and interconnects property

 arch/arm64/boot/dts/nvidia/tegra234.dtsi   | 276 ++++++++++
 drivers/memory/tegra/mc.c                  |  24 +
 drivers/memory/tegra/mc.h                  |   1 +
 drivers/memory/tegra/tegra186-emc.c        | 133 +++++
 drivers/memory/tegra/tegra234.c            | 595 ++++++++++++++++++++-
 drivers/pci/controller/dwc/pcie-tegra194.c |  44 +-
 include/dt-bindings/memory/tegra234-mc.h   |   5 +
 include/linux/tegra-icc.h                  |  65 +++
 include/soc/tegra/mc.h                     |   8 +
 9 files changed, 1142 insertions(+), 9 deletions(-)
 create mode 100644 include/linux/tegra-icc.h

[1] https://lore.kernel.org/lkml/20221220160240.27494-1-sumitg@nvidia.com/
[2] https://lore.kernel.org/lkml/20230220140559.28289-1-sumitg@nvidia.com/
[3] https://lore.kernel.org/lkml/20230320182441.11904-1-sumitg@nvidia.com/
[4] https://lore.kernel.org/lkml/20230327161426.32639-1-sumitg@nvidia.com/
[5] https://lore.kernel.org/lkml/20230330133354.714-1-sumitg@nvidia.com/
[6] https://lore.kernel.org/lkml/20230411110002.19824-1-sumitg@nvidia.com/
[7] https://lore.kernel.org/lkml/20230424131337.20151-1-sumitg@nvidia.com/ 

-- 
2.17.1
