Message-ID: <CAAhSdy3VVbOjMZR_De3D6hC0XwgjwpRzxU=Xpf=OOULfTYOxOw@mail.gmail.com>
Date: Fri, 24 May 2024 18:24:20 +0530
From: Anup Patel <anup@...infault.org>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: Anup Patel <apatel@...tanamicro.com>, Nick Hu <nick.hu@...ive.com>, palmer@...belt.com,
rafael@...nel.org, daniel.lezcano@...aro.org, paul.walmsley@...ive.com,
linux-pm@...r.kernel.org, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, zong.li@...ive.com,
Cyan Yang <cyan.yang@...ive.com>
Subject: Re: [PATCH] cpuidle: riscv-sbi: Add cluster_pm_enter()/exit()
On Fri, May 24, 2024 at 4:11 PM Ulf Hansson <ulf.hansson@...aro.org> wrote:
>
> On Fri, 17 May 2024 at 06:39, Anup Patel <apatel@...tanamicro.com> wrote:
> >
> > On Thu, May 16, 2024 at 9:40 AM Nick Hu <nick.hu@...ive.com> wrote:
> > >
> > > Hi Anup
> > >
> > > On Wed, May 15, 2024 at 9:46 PM Anup Patel <anup@...infault.org> wrote:
> > > >
> > > > Hi Nick,
> > > >
> > > > On Wed, May 15, 2024 at 5:45 PM Nick Hu <nick.hu@...ive.com> wrote:
> > > > >
> > > > > Hi Anup,
> > > > >
> > > > > Thank you for your guidance.
> > > > > After enabling the debug message, we found a way to solve the problem
> > > > > by the following steps:
> > > > > 1. Add a compatible string in 'power-domains' node otherwise it won't
> > > > > be the supplier of the consumers. (See of_link_to_phandle())
> > > >
> > > > Hmm, requiring a compatible string is odd. Where should we document
> > > > this compatible string?
> > > >
> > > Sorry, this is my fault. I didn't include some updates in
> > > of_link_to_phandle(). This led to some misunderstandings here.
> > > You are right, we don't need it.
> > > The supplier will be linked to the CLUSTER_PD node.
> > >
> > > > > 2. Move the 'power-domains' node outside the 'cpus' node, otherwise it
> > > > > won't be added to the device hierarchy by device_add().
> > > > > 3. Update the cpuidle-riscv-sbi driver to get the pds_node from
> > > > > '/power-domains'.
> > > >
> > > > By adding a compatible string and moving the "power-domains" node
> > > > outside, you are simply forcing the OF framework to populate devices.
> > > >
> > > > How about manually creating a platform_device for each power-domain
> > > > DT node using of_platform_device_create() in sbi_pd_init()?
> > > >
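As a rough illustration (not actual driver code), such manual creation
could look like the sketch below; the helper name
sbi_pd_populate_devices() is made up here:

#include <linux/of.h>
#include <linux/of_platform.h>

/* Create a platform_device for every child of the "power-domains" node. */
static int sbi_pd_populate_devices(struct device_node *pds_node)
{
        struct device_node *np;
        struct platform_device *pdev;

        for_each_child_of_node(pds_node, np) {
                /* NULL bus id and parent: let the OF core pick defaults. */
                pdev = of_platform_device_create(np, NULL, NULL);
                if (!pdev) {
                        of_node_put(np);
                        return -ENODEV;
                }
        }

        return 0;
}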
> > > Thanks for the suggestion! We have tested the solution and it works.
> > > We were wondering whether it's feasible for us to relocate the
> > > 'power-domains' node outside of /cpus? The CLUSTER_PD might
> > > encompass not only the CPUs but also other components within the
> > > cluster.
> >
> > The cpuidle-riscv-sbi driver expects the "power-domains" DT node
> > under the "/cpus" DT node because this driver only deals with power
> > domains related to the CPU cluster or CPU cache-hierarchy. It does
> > make sense to define L2/L3 power domains under
> > "/cpus/power-domains" since these are related to CPUs.
> >
> > Moving the CPU "power-domains" DT node directly under "/" or
> > somewhere else would mean that it covers system-wide power
> > domains, which is not true.
>
> I understand your point, but I am not convinced that the power-domains
> need to belong to the "cpus" node. Ideally, the power-domain describes
> the power-rail and the interface to manage the CPUs; this can surely
> be described outside the "cpus" node - even if there are only CPUs
> that are using it.
>
> Moreover, moving forward, one should not be surprised if it turns out
> that a platform has devices other than the CPUs sharing the same
> power-rail as the CPU cluster. At least, we have that for arm/psci
> [1].
For non-CPU power domains, we are working on a messaging
specification (RPMI) [1]. The supervisor software might have
direct access to an RPMI transport or it can send RPMI messages
via the SBI MPXY extension [2].
If power-rails on a platform are shared between CPUs and
devices, then the platform can:
1) Use SBI HSM for CPUs and use RPMI for devices. The
DT bindings for device power-domains based on RPMI are
still work-in-progress. If there are multiple supervisor domains
(aka system-level partitions) created by the SBI implementation or
some partitioning hypervisor, then the RPMI messages can be
arbitrated by the SBI implementation using the SBI MPXY extension.
The SBI MPXY extension also allows sharing the same RPMI
transport between machine-mode (firmware) and supervisor-mode.
2) Use its own platform-specific power-domain driver for both
CPUs and devices (basically, don't use SBI HSM and RPMI); see
the sketch below. In this case, there won't be any controlled
access to (or arbitration of) power rails across supervisor domains.
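A bare-bones sketch of option 2, assuming a made-up platform driver
that registers a single cluster domain with the genpd framework
(names and callbacks are purely illustrative):

#include <linux/of.h>
#include <linux/pm_domain.h>

/* Hypothetical cluster power domain managed by a platform driver. */
static int my_cluster_pd_power_off(struct generic_pm_domain *genpd)
{
        /* Program the platform's power-rail controller here. */
        return 0;
}

static int my_cluster_pd_power_on(struct generic_pm_domain *genpd)
{
        return 0;
}

static struct generic_pm_domain my_cluster_pd = {
        .name = "my-cluster-pd",
        .power_off = my_cluster_pd_power_off,
        .power_on = my_cluster_pd_power_on,
};

static int my_cluster_pd_setup(struct device_node *np)
{
        int ret;

        /* Register the domain with genpd, initially powered on. */
        ret = pm_genpd_init(&my_cluster_pd, NULL, false);
        if (ret)
                return ret;

        /* Let CPUs and devices reference this domain from their DT nodes. */
        return of_genpd_add_provider_simple(np, &my_cluster_pd);
}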
>
> >
> > I suggest we continue using "/cpus/power-domains" DT node
> > only for power domains related to CPU clusters or CPU
> > cache-hierarchy.
> >
> > For system wide power domains of SoC devices, we can either:
> > 1) Use device power domains through the SBI MPXY extension
> > via a different driver
> > 2) Use a platform-specific driver
> >
> > >
> > > We also looked at the cpuidle-psci-domain driver and it seems Arm doesn't
> > > create devices for each subnode of the PSCI domain.
> > > Is there any reason why they don't need it?
>
> We don't need it for arm as we have a separate node for PSCI and its
> power-domains [2]. Moreover, we have a separate driver that manages
> the power-domain (cpuidle-psci-domain).
Unlike the ARM world, we don't have any DT node for SBI in
the RISC-V world because the SBI is always there. Due to this,
the SBI HSM CPU idle driver (this driver) currently looks for
CPU "power-domains" under "/cpus" DT node because the
SBI HSM extension only deals with CPU states.
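Roughly speaking, the lookup amounts to the sketch below (a simplified
illustration rather than the exact driver code; the helper name is
made up):

#include <linux/of.h>

/* Find the CPU "power-domains" node that the driver expects under /cpus. */
static struct device_node *sbi_cpuidle_get_pds_node(void)
{
        struct device_node *cpus, *pds;

        cpus = of_find_node_by_path("/cpus");
        if (!cpus)
                return NULL;

        /* By convention, only CPU-related power domains are described here. */
        pds = of_get_child_by_name(cpus, "power-domains");
        of_node_put(cpus);

        return pds;
}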
>
> [...]
>
> [1] arch/arm64/boot/dts/qcom/sc7280.dtsi (search for "CLUSTER_PD")
> [2] Documentation/devicetree/bindings/arm/psci.yaml
>
> Kind regards
> Uffe
[1] https://github.com/riscv-non-isa/riscv-rpmi
[2] https://docs.google.com/document/d/1Ivej3u6uQgVdJHnjrbqgUwE1Juy75d4uYCjWrdNjeAg/edit?usp=sharing
Best regards,
Anup