Message-ID: <927f15d5-da2a-4282-b80f-c1c7563a4367@oss.qualcomm.com>
Date: Fri, 19 Dec 2025 09:50:18 +0800
From: yuanfang zhang <yuanfang.zhang@....qualcomm.com>
To: Leo Yan <leo.yan@....com>
Cc: Suzuki K Poulose <suzuki.poulose@....com>,
        Mike Leach <mike.leach@...aro.org>,
        James Clark <james.clark@...aro.org>, Rob Herring <robh@...nel.org>,
        Krzysztof Kozlowski <krzk+dt@...nel.org>,
        Conor Dooley <conor+dt@...nel.org>,
        Mathieu Poirier <mathieu.poirier@...aro.org>,
        Leo Yan <leo.yan@...ux.dev>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Bjorn Andersson <andersson@...nel.org>,
        Konrad Dybcio <konradybcio@...nel.org>, kernel@....qualcomm.com,
        coresight@...ts.linaro.org, linux-arm-kernel@...ts.infradead.org,
        devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-arm-msm@...r.kernel.org, maulik.shah@....qualcomm.com,
        Jie Gan <jie.gan@....qualcomm.com>
Subject: Re: [PATCH v2 00/12] coresight: Add CPU cluster funnel/replicator/tmc
 support



On 12/18/2025 6:40 PM, Leo Yan wrote:
> Hi,
> 
> On Thu, Dec 18, 2025 at 12:09:40AM -0800, Coresight ML wrote:
> 
> [...]
> 
>> - Utilizing `smp_call_function_single()` to ensure register accesses
>>   (initialization, enablement, sysfs reads) are always executed on a
>>   powered CPU within the target cluster.
> 
> This is the concern Mike raised earlier.
> 
> Let me restate it as a general question: how does the Linux kernel manage
> a power domain shared by multiple hardware modules?
> 
> A general solution is to bind a power domain (let's say PD1) to both
> module A (MOD_A) and module B (MOD_B).  Each time before accessing MOD_A
> or MOD_B, PD1 must be powered on first via the pm_runtime APIs, with
> its refcount increased accordingly.
> 
> My understanding is that the problem in your case is that the driver fails
> to create a relationship between the funnel/replicator modules and the
> cluster power domain.  Instead, you are trying to use the CPUs in the
> same cluster as delegates for power operations - when you want to
> access MOD_B, you wake up MOD_A, which shares the same power domain,
> only to turn on PD1 in order to access MOD_B.
> 
> Have you discussed with the firmware and hardware engineers whether it
> is feasible to provide explicit power and clock control interfaces for
> the funnel and replicator modules?  I can imagine the cluster power
> domain's design might differ from other device power domains, but
> should not the hardware provide a sane design that allows software to
> control power for the access logic within it?
> 
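
For reference, the general consumer pattern described above would look
roughly like the sketch below, assuming the module's "power-domains" DT
property points at the shared PD1 and pm_runtime_enable() was called at
probe (names here are illustrative, not from this series):

#include <linux/pm_runtime.h>

static int mod_read_regs(struct device *dev, void __iomem *base)
{
	int ret;

	/* Powers on PD1, or just bumps its refcount if already on. */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;

	/* ... MMIO accesses to MOD_A/MOD_B are safe here ... */

	/* Drop the reference; PD1 may power off at refcount zero. */
	pm_runtime_put(dev);
	return 0;
}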

It is due to the particular characteristics of the CPU cluster power domain.
Runtime PM for CPU devices works a little differently: it is mostly used to
manage the hierarchical CPU topology (PSCI OSI mode), talking to the genpd
framework to handle the last CPU in the cluster.
It doesn't actually send an IPI to wake up the CPU device: there are no
.power_on/.power_off callbacks implemented that would get invoked from the
.runtime_resume callback. This behavior is aligned with the upstream kernel.
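
For contrast, a conventional genpd provider implements the .power_on /
.power_off callbacks that the runtime-resume path eventually invokes; the
hierarchical CPU PM domains set up for PSCI OSI mode leave this kind of
register-level control unimplemented. A minimal sketch of the conventional
case (illustrative names, not from this series):

#include <linux/pm_domain.h>

static int my_pd_power_on(struct generic_pm_domain *pd)
{
	/* Poke the power controller to switch the domain on. */
	return 0;
}

static int my_pd_power_off(struct generic_pm_domain *pd)
{
	/* Switch the domain off once the last consumer suspends. */
	return 0;
}

static struct generic_pm_domain my_pd = {
	.name      = "my_pd",
	.power_on  = my_pd_power_on,
	.power_off = my_pd_power_off,
};
/* Registered via pm_genpd_init(&my_pd, NULL, true); the CPU PM domains
 * for PSCI OSI mode do not populate .power_on/.power_off like this. */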


> Generally speaking, using smp_call_function_single() makes sense only
> when accessing logic within the CPU boundary.
> 
> P.S. Currently you can use "taskset" as a temporary solution without
> any code change, something like:
> 
>   taskset -c 0 echo 1 > /sys/bus/coresight/devices/etm0/enable_source


This can address the runtime issue, but it does not resolve the problem during the probe phase.
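
For the probe phase, the approach from the cover letter routes each register
access through smp_call_function_single() so it executes on a powered CPU
inside the target cluster. A minimal sketch of that pattern, assuming
hypothetical helper names:

#include <linux/smp.h>
#include <linux/io.h>

struct reg_read_ctx {
	void __iomem *base;
	u32 offset;
	u32 value;
};

/* Runs in IPI context on a powered CPU inside the target cluster. */
static void read_reg_on_cpu(void *data)
{
	struct reg_read_ctx *ctx = data;

	ctx->value = readl_relaxed(ctx->base + ctx->offset);
}

static int read_reg_on_cluster_cpu(int cpu, void __iomem *base,
				   u32 offset, u32 *val)
{
	struct reg_read_ctx ctx = { .base = base, .offset = offset };
	int ret;

	/* wait=1 blocks until the handler has finished on @cpu. */
	ret = smp_call_function_single(cpu, read_reg_on_cpu, &ctx, 1);
	if (!ret)
		*val = ctx.value;
	return ret;
}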

thanks,
Yuanfang

>
> Thanks,
> Leo

