Message-ID: <1964444.taCxCBeP46@rjwysocki.net>
Date: Wed, 16 Apr 2025 20:10:50 +0200
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Linux PM <linux-pm@...r.kernel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Lukasz Luba <lukasz.luba@....com>,
 Peter Zijlstra <peterz@...radead.org>,
 Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Morten Rasmussen <morten.rasmussen@....com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
 Pierre Gondois <pierre.gondois@....com>,
 Christian Loehle <christian.loehle@....com>,
 Tim Chen <tim.c.chen@...ux.intel.com>
Subject: [RFT][PATCH v1 7/8] cpufreq: intel_pstate: Align perf domains with L2 cache

From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>

On some hybrid platforms, a group of cores (referred to as a module) may
share an L2 cache, in which case they also share a voltage regulator and
always run at the same frequency (while not in idle states).

For this reason, make hybrid_register_perf_domain() in the intel_pstate
driver add all CPUs sharing an L2 cache to the same perf domain for EAS.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
---

New in v1.

---
 drivers/cpufreq/intel_pstate.c |   23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -999,8 +999,11 @@
 {
 	static const struct em_data_callback cb
 			= EM_ADV_DATA_CB(hybrid_active_power, hybrid_get_cost);
+	struct cpu_cacheinfo *cacheinfo = get_cpu_cacheinfo(cpu);
+	const struct cpumask *cpumask = cpumask_of(cpu);
 	struct cpudata *cpudata = all_cpu_data[cpu];
 	struct device *cpu_dev;
+	int ret;
 
 	/*
 	 * Registering EM perf domains without enabling asymmetric CPU capacity
@@ -1014,9 +1017,25 @@
 	if (!cpu_dev)
 		return false;
 
-	if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
-					cpumask_of(cpu), false))
+	if (cacheinfo) {
+		unsigned int i;
+
+		/* Find the L2 cache and the CPUs sharing it. */
+		for (i = 0; i < cacheinfo->num_leaves; i++) {
+			if (cacheinfo->info_list[i].level == 2) {
+				cpumask = &cacheinfo->info_list[i].shared_cpu_map;
+				break;
+			}
+		}
+	}
+
+	ret = em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
+					  cpumask, false);
+	if (ret) {
+		cpudata->em_registered = ret == -EEXIST;
+
 		return false;
+	}
 
 	cpudata->em_registered = true;
 



