Message-ID: <eb9abe9d-3d12-4bf1-85da-deb38b8da321@arm.com>
Date: Thu, 23 Oct 2025 14:09:36 +0100
From: Lukasz Luba <lukasz.luba@....com>
To: Zhongqiu Han <quic_zhonhan@...cinc.com>
Cc: linux-pm@...r.kernel.org, lenb@...nel.org, christian.loehle@....com,
 amit.kucheria@...aro.org, ulf.hansson@...aro.org, james.morse@....com,
 Dave.Martin@....com, reinette.chatre@...el.com, tony.luck@...el.com,
 pavel@...nel.org, linux-kernel@...r.kernel.org, rafael@...nel.org
Subject: Re: [PATCH v2 0/5] PM QoS: Add CPU affinity latency QoS support and
 resctrl integration

Hi Zhongqiu,

My apologies for being a bit late with my comments...

On 7/21/25 13:40, Zhongqiu Han wrote:
> Hi all,
> 
> This patch series introduces support for CPU affinity-based latency
> constraints in the PM QoS framework. The motivation is to allow
> finer-grained power management by enabling latency QoS requests to target
> specific CPUs, rather than applying system-wide constraints.
> 
> The current PM QoS framework supports global and per-device CPU latency
> constraints. However, in many real-world scenarios, such as IRQ affinity
> tuning or CPU-bound kernel threads, only a subset of CPUs is
> performance-critical. Applying global constraints in such cases
> unnecessarily prevents the remaining CPUs from entering deeper C-states,
> leading to increased power consumption.
> 
> This series addresses that limitation by introducing a new interface that
> allows latency constraints to be applied to a CPU mask. This is
> particularly useful on heterogeneous platforms (e.g., big.LITTLE) and
> embedded systems where power efficiency is critical. For example:
> 
>                          driver A       rt kthread B      module C
>    CPU IDs (mask):         0-3              2-5              6-7
>    target latency(us):     20               30               100
>                            |                |                |
>                            v                v                v
>                            +---------------------------------+
>                            |        PM  QoS  Framework       |
>                            +---------------------------------+
>                            |                |                |
>                            v                v                v
>    CPU IDs (mask):        0-3            2-3,4-5            6-7
>    runtime latency(us):   20             20, 30             100
> 
> The current implementation includes only cpu_affinity_latency_qos_add()
> and cpu_affinity_latency_qos_remove() interfaces. An update interface is
> planned for future submission, along with PM QoS optimizations in the UFS
> subsystem.
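As a rough illustration of how a driver might use the proposed interface
(kernel-style pseudocode; the request structure name and the exact
signatures are assumptions based on the function names in this cover
letter, not taken from the patches):

```c
/*
 * Hypothetical driver snippet -- cpu_affinity_qos_req and the exact
 * add/remove signatures are assumed for illustration only.
 */
static struct cpu_affinity_qos_req my_req;

static void my_driver_start(void)
{
	struct cpumask mask;

	cpumask_clear(&mask);
	for (int cpu = 0; cpu <= 3; cpu++)
		cpumask_set_cpu(cpu, &mask);

	/* Keep CPUs 0-3 shallow enough for a 20us exit latency. */
	cpu_affinity_latency_qos_add(&my_req, &mask, 20);
}

static void my_driver_stop(void)
{
	/* Drop the constraint so CPUs 0-3 may enter deep C-states again. */
	cpu_affinity_latency_qos_remove(&my_req);
}
```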
> 
> Patch1 introduces the core support for CPU affinity latency QoS in the PM
> QoS framework.
> 
> Patch2 removes redundant KERN_ERR prefixes in WARN() calls in the global
> CPU PM QoS interface. This change addresses issues in existing code and is
> not related to the new interface introduced in this patch series.
> 
> Patch3 adds documentation for the new interface.
> 
> Patch4 fixes a minor documentation issue related to the return type of
> cpu_latency_qos_request_active(). This change addresses issues in existing
> doc and is not related to the new interface introduced in this patch
> series.
> 
> Patch5 updates the resctrl pseudo-locking logic to use the new CPU
> affinity latency QoS helpers, improving clarity and consistency. The only
> functional change, which is beneficial, is that the new interface actively
> wakes up CPUs whose latency QoS values have changed, ensuring the new
> latency limit takes effect immediately.

Could you describe the big picture of this proposed design in a bit more
detail, please?

Ideally with a diagram of the connected frameworks and drivers, showing
how they are going to work together.

E.g.:
1. what are the other components in the kernel which would use this
feature?
2. is there also a user-space interface planned for it, so that a HAL in
the middleware could configure these "short-wake-up" CPUs?
3. is it possible to view/debug from user-space which component
requested this setting for a given subset of CPUs?

Regards,
Lukasz

