Message-ID: <8c461b2e-7057-4974-bfd4-7215ec2855f1@oss.qualcomm.com>
Date: Fri, 31 Oct 2025 18:16:29 +0800
From: yuanfang zhang <yuanfang.zhang@....qualcomm.com>
To: Mike Leach <mike.leach@...aro.org>
Cc: Suzuki K Poulose <suzuki.poulose@....com>,
        James Clark <james.clark@...aro.org>, Rob Herring <robh@...nel.org>,
        Krzysztof Kozlowski <krzk+dt@...nel.org>,
        Conor Dooley
 <conor+dt@...nel.org>,
        Mathieu Poirier <mathieu.poirier@...aro.org>,
        Leo Yan <leo.yan@...ux.dev>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Bjorn Andersson <andersson@...nel.org>,
        Konrad Dybcio <konradybcio@...nel.org>, kernel@....qualcomm.com,
        coresight@...ts.linaro.org, linux-arm-kernel@...ts.infradead.org,
        devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-arm-msm@...r.kernel.org, Jie Gan <jie.gan@....qualcomm.com>
Subject: Re: [PATCH 00/12] coresight: Add CPU cluster funnel/replicator/tmc
 support
On 10/30/2025 5:58 PM, Mike Leach wrote:
> Hi,
> 
> On Thu, 30 Oct 2025 at 07:51, yuanfang zhang
> <yuanfang.zhang@....qualcomm.com> wrote:
>>
>>
>>
>> On 10/29/2025 7:01 PM, Mike Leach wrote:
>>> Hi,
>>>
>>> This entire set seems to initially check the generic power domain for
>>> a list of associated CPUs, then check CPU state for all other
>>> operations.
>>>
>>> Why not simply use the generic power domain state itself, along with
>>> the power up / down notifiers to determine if the registers are safe
>>> to access? If the genpd is powered up then the registers must be safe
>>> to access?
>>>
>>> Regards
>>>
>>> Mike
>>>
>>
>> Hi Mike,
>>
>> Yes, when the genpd is powered up the registers can be accessed, but this approach has the problems below:
>>
> 
> The point I was making is to use genpd / notifications to determine
> whether the device is powered, so you know if it is safe to access the
> registers. This is different from the faults you mention below in your
> power infrastructure.
> You are reading dev->pm_domain to extract the CPU map, so the
> notifiers and state must also be available. If you can use CPUHP
> notifiers then genpd notifiers would also work.
> However, one issue I do see with this is that there is no code added
> to the driver to associate the dev with the pm_domain, which would
> normally be there, so I am unclear how this actually works.
> 
Device power domain attach is handled in the bus-level code. Using an SMP cross-call
can wake up the cluster power domain, and pm_runtime_get_sync() can block cluster
power-down; this approach ensures the cluster stays powered after enable.
Power management is handled in the per-CPU source code: when a CPU enters LPM, the
CPU LPM notifier of the per-CPU source disables the path and the source. Disabling the
path calls pm_runtime_put(), after which the cluster is allowed to power down. CPUHP
follows the same logic.
Leo's patches already implement the power management described above:
https://lore.kernel.org/all/20250915-arm_coresight_power_management_fix-v3-14-ea49e91124ec@arm.com/
https://lore.kernel.org/all/20250915-arm_coresight_power_management_fix-v3-31-ea49e91124ec@arm.com/
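To make the flow concrete, a minimal sketch of the enable path described above
(placeholder names only -- drvdata->cluster_cpumask, funnel_enable_hw() and the
drvdata layout are illustrative, not the exact identifiers used in this series):

```c
/* Sketch only: runs the register programming on a CPU inside the
 * cluster, so the cluster power domain is guaranteed to be up. */
static void funnel_enable_hw_smp(void *data)
{
	struct funnel_drvdata *drvdata = data;

	funnel_enable_hw(drvdata);	/* touches cluster-local registers */
}

static int funnel_enable_cluster(struct funnel_drvdata *drvdata)
{
	int cpu;

	/* Keep the cluster from powering down while the path is enabled;
	 * released again via pm_runtime_put() on the disable path. */
	pm_runtime_get_sync(drvdata->dev);

	cpus_read_lock();
	cpu = cpumask_any_and(&drvdata->cluster_cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids) {
		/* All CPUs of the cluster are offline: defer/fail. */
		cpus_read_unlock();
		pm_runtime_put(drvdata->dev);
		return -ENODEV;
	}

	/* The IPI wakes the target CPU, bringing the cluster out of LPM
	 * before any register access happens. */
	smp_call_function_single(cpu, funnel_enable_hw_smp, drvdata, true);
	cpus_read_unlock();

	return 0;
}
```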
> Associating a non-CPU device with a bunch of CPUs does not seem
> correct. You are altering a generic coresight driver to solve a
> specific platform problem, when other solutions should be used.
> 
Chip configurations can sometimes be quite different: there may be a single
cluster / genpd containing ALL the CPUs, while those CPUs are powered by
different CPU rails, so checking against the CPUs makes more sense.
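For reference, extracting the CPU map from the attached power domain (the
mechanism Mike refers to above) could look roughly like this sketch -- it
assumes the genpd was created with GENPD_FLAG_CPU_DOMAIN so that its cpus
mask is populated; names and error handling are illustrative only:

```c
/* Sketch: derive the cluster cpumask from the device's attached genpd. */
static int get_cluster_cpumask(struct device *dev, struct cpumask *mask)
{
	struct generic_pm_domain *genpd;

	if (!dev->pm_domain)
		return -ENODEV;

	genpd = pd_to_genpd(dev->pm_domain);

	/* genpd->cpus is only maintained for GENPD_FLAG_CPU_DOMAIN domains. */
	cpumask_copy(mask, genpd->cpus);
	return 0;
}
```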
>> 1. pm_runtime_get_sync() can trigger the cluster power domain's power-up notifier
>> without actually powering up the cluster, and there is no way to distinguish a real
>> power-up notification from one triggered by pm_runtime_get_sync().
>> 2. Using the power up/down notifiers cannot actively wake up the cluster power domain,
>> so the components in this cluster fail to be enabled while the cluster
>> is powered off.
>> 3. Using the power up/down notifiers for register access does not guarantee
>> the correct path enablement sequence.
>>
> 
> Does all this not simply mean that you need to fix your power
> management drivers / configuration so that it works correctly, rather
> than create a poor workaround in unrelated drivers such as the
> coresight devices?
> 
Runtime PM for CPU devices works a little differently: it is mostly used to manage
the hierarchical CPU topology (PSCI OSI mode), talking to the genpd framework to
handle last-CPU-in-cluster accounting.
It doesn't actually send an IPI to wake up the CPU device (no .power_on/.power_off
callback is implemented to be invoked from the .runtime_resume callback).
This is all upstream code.
Thanks,
Yuanfang
> Thanks and  Regards
> 
> 
> 
> Mike
> 
>> thanks,
>> yuanfang
>>
>>> On Tue, 28 Oct 2025 at 06:28, Yuanfang Zhang
>>> <yuanfang.zhang@....qualcomm.com> wrote:
>>>>
>>>> This patch series introduces support for CPU cluster local CoreSight components,
>>>> including funnel, replicator, and TMC, which reside inside CPU cluster
>>>> power domains. These components require special handling due to power
>>>> domain constraints.
>>>>
>>>> Unlike system-level CoreSight devices, CPU cluster local components share the
>>>> power domain of the CPU cluster. When the cluster enters low-power mode (LPM),
>>>> the registers of these components become inaccessible. Importantly, `pm_runtime_get`
>>>> calls alone are insufficient to bring the CPU cluster out of LPM, making
>>>> standard register access unreliable in such cases.
>>>>
>>>> To address this, the series introduces:
>>>> - Device tree bindings for CPU cluster local funnel, replicator, and TMC.
>>>> - Introduce a cpumask to record the CPUs belonging to the cluster where the
>>>>   cpu cluster local component resides.
>>>> - Safe register access via smp_call_function_single() on CPUs within the
>>>>   associated cpumask, ensuring the cluster is power-resident during access.
>>>> - Delayed probe support for CPU cluster local components: when all CPUs of
>>>>   the cluster are offline at probe time, the component is re-probed once any
>>>>   CPU in the cluster comes online.
>>>> - Introduce `cs_mode` in the link enable interfaces to avoid the use of
>>>>   smp_call_function_single() in perf mode.
>>>>
>>>> Patch summary:
>>>> Patch 1: Adds device tree bindings for CPU cluster funnel/replicator/TMC devices.
>>>> Patches 2–3: Add support for CPU cluster funnel.
>>>> Patches 4-6: Add support for CPU cluster replicator.
>>>> Patches 7-10: Add support for CPU cluster TMC.
>>>> Patch 11: Add 'cs_mode' to link enable functions.
>>>> Patch 12: Add Coresight nodes for the APSS debug block for x1e80100 and
>>>> fix a related build issue.
>>>>
>>>> Verification:
>>>>
>>>> This series has been verified on sm8750.
>>>>
>>>> Test steps for delay probe:
>>>>
>>>> 1. limit the system to enable at most 6 CPU cores during boot.
>>>> 2. echo 1 >/sys/bus/cpu/devices/cpu6/online.
>>>> 3. check whether ETM6 and ETM7 have been probed.
>>>>
>>>> Test steps for sysfs mode:
>>>>
>>>> echo 1 >/sys/bus/coresight/devices/tmc_etf0/enable_sink
>>>> echo 1 >/sys/bus/coresight/devices/etm0/enable_source
>>>> echo 1 >/sys/bus/coresight/devices/etm6/enable_source
>>>> echo 0 >/sys/bus/coresight/devices/etm0/enable_source
>>>> echo 0 >/sys/bus/coresight/devices/etm6/enable_source
>>>> echo 0 >/sys/bus/coresight/devices/tmc_etf0/enable_sink
>>>>
>>>> echo 1 >/sys/bus/coresight/devices/tmc_etf1/enable_sink
>>>> echo 1 >/sys/bus/coresight/devices/etm0/enable_source
>>>> cat /dev/tmc_etf1 >/tmp/etf1.bin
>>>> echo 0 >/sys/bus/coresight/devices/etm0/enable_source
>>>> echo 0 >/sys/bus/coresight/devices/tmc_etf1/enable_sink
>>>>
>>>> echo 1 >/sys/bus/coresight/devices/tmc_etf2/enable_sink
>>>> echo 1 >/sys/bus/coresight/devices/etm6/enable_source
>>>> cat /dev/tmc_etf2 >/tmp/etf2.bin
>>>> echo 0 >/sys/bus/coresight/devices/etm6/enable_source
>>>> echo 0 >/sys/bus/coresight/devices/tmc_etf2/enable_sink
>>>>
>>>> Test steps for sysfs node:
>>>>
>>>> cat /sys/bus/coresight/devices/tmc_etf*/mgmt/*
>>>>
>>>> cat /sys/bus/coresight/devices/funnel*/funnel_ctrl
>>>>
>>>> cat /sys/bus/coresight/devices/replicator*/mgmt/*
>>>>
>>>> Test steps for perf mode:
>>>>
>>>> perf record -a -e cs_etm//k -- sleep 5
>>>>
>>>> Signed-off-by: Yuanfang Zhang <yuanfang.zhang@....qualcomm.com>
>>>> ---
>>>> Yuanfang Zhang (12):
>>>>       dt-bindings: arm: coresight: Add cpu cluster tmc/funnel/replicator support
>>>>       coresight-funnel: Add support for CPU cluster funnel
>>>>       coresight-funnel: Handle delay probe for CPU cluster funnel
>>>>       coresight-replicator: Add support for CPU cluster replicator
>>>>       coresight-replicator: Handle delayed probe for CPU cluster replicator
>>>>       coresight-replicator: Update mgmt_attrs for CPU cluster replicator compatibility
>>>>       coresight-tmc: Add support for CPU cluster ETF and refactor probe flow
>>>>       coresight-tmc-etf: Refactor enable function for CPU cluster ETF support
>>>>       coresight-tmc: Update tmc_mgmt_attrs for CPU cluster TMC compatibility
>>>>       coresight-tmc: Handle delayed probe for CPU cluster TMC
>>>>       coresight: add 'cs_mode' to link enable functions
>>>>       arm64: dts: qcom: x1e80100: add Coresight nodes for APSS debug block
>>>>
>>>>  .../bindings/arm/arm,coresight-dynamic-funnel.yaml |  23 +-
>>>>  .../arm/arm,coresight-dynamic-replicator.yaml      |  22 +-
>>>>  .../devicetree/bindings/arm/arm,coresight-tmc.yaml |  22 +-
>>>>  arch/arm64/boot/dts/qcom/x1e80100.dtsi             | 885 +++++++++++++++++++++
>>>>  arch/arm64/boot/dts/qcom/x1p42100.dtsi             |  12 +
>>>>  drivers/hwtracing/coresight/coresight-core.c       |   7 +-
>>>>  drivers/hwtracing/coresight/coresight-funnel.c     | 260 +++++-
>>>>  drivers/hwtracing/coresight/coresight-replicator.c | 343 +++++++-
>>>>  drivers/hwtracing/coresight/coresight-tmc-core.c   | 396 +++++++--
>>>>  drivers/hwtracing/coresight/coresight-tmc-etf.c    | 105 ++-
>>>>  drivers/hwtracing/coresight/coresight-tmc.h        |  10 +
>>>>  drivers/hwtracing/coresight/coresight-tnoc.c       |   3 +-
>>>>  drivers/hwtracing/coresight/coresight-tpda.c       |   3 +-
>>>>  include/linux/coresight.h                          |   3 +-
>>>>  14 files changed, 1912 insertions(+), 182 deletions(-)
>>>> ---
>>>> base-commit: 01f96b812526a2c8dcd5c0e510dda37e09ec8bcd
>>>> change-id: 20251016-cpu_cluster_component_pm-ce518f510433
>>>>
>>>> Best regards,
>>>> --
>>>> Yuanfang Zhang <yuanfang.zhang@....qualcomm.com>
>>>>
>>>
>>>
>>
> 
> 
> --
> Mike Leach
> Principal Engineer, ARM Ltd.
> Manchester Design Centre. UK