Message-ID: <4908113.GXAFRqVoOG@rjwysocki.net>
Date: Fri, 02 Aug 2024 20:15:10 +0200
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: x86 Maintainers <x86@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Linux PM <linux-pm@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>, Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ricardo Neri <ricardo.neri@...el.com>, Tim Chen <tim.c.chen@...el.com>
Subject: [PATCH v1 0/3] x86 / intel_pstate: Set asymmetric CPU capacity on hybrid systems

Hi Everyone,

The purpose of this series is to provide the scheduler with asymmetric CPU
capacity information on x86 hybrid systems based on Intel hardware.

The asymmetric CPU capacity information is important on hybrid systems as it
allows utilization to be computed for tasks in a consistent way across all
CPUs in the system, regardless of their capacity. This, in turn, allows
the schedutil cpufreq governor to set CPU performance levels consistently
in the cases when tasks migrate between CPUs of different capacities. It
should also help to improve task placement and load balancing decisions on
hybrid systems and it is key for anything along the lines of EAS.
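
As a rough illustration of what "consistent" means here (this is a
userspace sketch, not the kernel's PELT code; the helper name and the
example capacity value are made up), utilization measured on a CPU gets
scaled by that CPU's capacity so the result is comparable everywhere:

    #include <stdio.h>

    /* 1024 is the scheduler's SCHED_CAPACITY_SCALE convention. */
    #define SCHED_CAPACITY_SCALE 1024UL

    /*
     * Hypothetical helper: scale raw utilization measured on a given CPU
     * by that CPU's capacity, so the same number of utilization units
     * means the same amount of work on every CPU in the system.
     */
    static unsigned long scale_util(unsigned long util, unsigned long capacity)
    {
            return util * capacity / SCHED_CAPACITY_SCALE;
    }

    int main(void)
    {
            unsigned long small_cap = 666; /* made-up capacity of a small CPU */

            /*
             * A task fully loading the small CPU only needs ~65% of a
             * full-capacity (1024) CPU, which is what schedutil should see
             * after the task migrates to a big CPU.
             */
            printf("%lu\n", scale_util(SCHED_CAPACITY_SCALE, small_cap));
            return 0;
    }
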
The information in question comes from the MSR_HWP_CAPABILITIES register and
is provided to the scheduler by the intel_pstate driver, as per the changelog
of patch [3/3]. Patch [2/3] introduces the arch infrastructure needed for
that (in the form of a per-CPU capacity variable) and patch [1/3] is a
preliminary code adjustment.
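
For context, the arch infrastructure mentioned above can be pictured
roughly as below.  This is only a sketch under assumptions (the function
names, the use of div64_u64() and the normalization against the most
capable CPU are mine, not necessarily what patch [2/3] does); the real
interface is in the patches themselves:

    /* Sketch of a per-CPU capacity variable filled in by the cpufreq driver. */
    #include <linux/percpu.h>
    #include <linux/math64.h>
    #include <linux/sched/topology.h>   /* SCHED_CAPACITY_SCALE */

    static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;

    /* The driver would call this once it knows the CPU's relative performance. */
    void set_cpu_capacity_sketch(int cpu, u64 perf, u64 max_perf)
    {
            /* Normalize so the most capable CPU reports SCHED_CAPACITY_SCALE. */
            per_cpu(cpu_scale, cpu) = div64_u64(perf * SCHED_CAPACITY_SCALE, max_perf);
    }

    /* What arch_scale_cpu_capacity() would return to the scheduler. */
    unsigned long get_cpu_capacity_sketch(int cpu)
    {
            return per_cpu(cpu_scale, cpu);
    }

In the real series the per-CPU performance numbers would come from
MSR_HWP_CAPABILITIES, as described in the changelog of patch [3/3].
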
This is based on an RFC posted previously
https://lore.kernel.org/linux-pm/7663799.EvYhyI6sBW@kreacher/
but differs from it quite a bit (except for the first patch).  The most
significant difference is based on the observation that frequency
invariance needs to be adjusted to the capacity scaling on hybrid systems
for the complete scale invariance to work as expected.
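
Roughly speaking, complete scale invariance means that a running task's
utilization contribution is scaled by both the frequency ratio and the
CPU capacity ratio, so the two factors have to be consistent with each
other.  A minimal userspace sketch of that arithmetic (the values are
made up, and this is not the kernel's PELT code):

    #include <stdio.h>

    #define SCHED_CAPACITY_SCALE 1024UL

    /*
     * Scale a raw running-time delta by the current/max frequency ratio
     * (frequency invariance) and by the CPU's capacity, which together
     * give the complete scale invariance referred to above.
     */
    static unsigned long scale_delta(unsigned long delta,
                                     unsigned long freq_scale,
                                     unsigned long cpu_scale)
    {
            return delta * freq_scale / SCHED_CAPACITY_SCALE
                         * cpu_scale / SCHED_CAPACITY_SCALE;
    }

    int main(void)
    {
            /* Made-up example: small CPU (capacity 666) running at half speed. */
            printf("%lu\n", scale_delta(1000, SCHED_CAPACITY_SCALE / 2, 666));
            return 0;
    }
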
Thank you!