Message-ID: <161107259553.414.4642815387021919631.tip-bot2@tip-bot2>
Date: Tue, 19 Jan 2021 16:09:55 -0000
From: "tip-bot2 for Rafael J. Wysocki" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Giovanni Gherdovich <ggherdovich@...e.cz>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/urgent] x86: PM: Register syscore_ops for scale invariance

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: 9c7d9017a49fb8516c13b7bff59b7da2abed23e1
Gitweb: https://git.kernel.org/tip/9c7d9017a49fb8516c13b7bff59b7da2abed23e1
Author: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
AuthorDate: Fri, 08 Jan 2021 19:05:59 +01:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 19 Jan 2021 17:04:03 +01:00

x86: PM: Register syscore_ops for scale invariance

On x86, scale invariance tends to be disabled during resume from
suspend-to-RAM, because the MPERF or APERF MSR values are not as
expected then, due to updates taking place after the platform
firmware has been invoked to complete the suspend transition.

That, of course, is not desirable, especially if the schedutil
scaling governor is in use, because the lack of scale invariance
causes it to be less reliable.
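
For illustration, here is a simplified sketch of the consumer side,
loosely modeled on arch_scale_freq_tick() in arch/x86/kernel/smpboot.c
(the ratio computation and overflow handling are trimmed, and the
function name is hypothetical); it shows why stale
arch_prev_aperf/arch_prev_mperf snapshots yield bogus deltas after
resume:

static void scale_freq_tick_sketch(void)
{
	u64 aperf, mperf, acnt, mcnt;

	rdmsrl(MSR_IA32_APERF, aperf);
	rdmsrl(MSR_IA32_MPERF, mperf);

	/*
	 * Deltas since the last snapshot.  If the snapshots predate the
	 * firmware suspend path, these differences no longer describe
	 * one tick period, and the overflow check in the real code ends
	 * up disabling scale invariance.
	 */
	acnt = aperf - this_cpu_read(arch_prev_aperf);
	mcnt = mperf - this_cpu_read(arch_prev_mperf);

	this_cpu_write(arch_prev_aperf, aperf);
	this_cpu_write(arch_prev_mperf, mperf);
}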

To counter that effect, modify init_freq_invariance() to register
a syscore_ops object for scale invariance with the ->resume callback
pointing to init_counter_refs(), which will run on the CPU starting
the resume transition (the other CPUs will be taken care of by the
"online" operations taking place later).
Fixes: e2b0d619b400 ("x86, sched: check for counters overflow in frequency invariant accounting")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Giovanni Gherdovich <ggherdovich@...e.cz>
Link: https://lkml.kernel.org/r/1803209.Mvru99baaF@kreacher
---
arch/x86/kernel/smpboot.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 8ca66af..117e24f 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -56,6 +56,7 @@
 #include <linux/numa.h>
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
+#include <linux/syscore_ops.h>
 
 #include <asm/acpi.h>
 #include <asm/desc.h>
@@ -2083,6 +2084,23 @@ static void init_counter_refs(void)
 	this_cpu_write(arch_prev_mperf, mperf);
 }
 
+#ifdef CONFIG_PM_SLEEP
+static struct syscore_ops freq_invariance_syscore_ops = {
+	.resume = init_counter_refs,
+};
+
+static void register_freq_invariance_syscore_ops(void)
+{
+	/* Bail out if registered already. */
+	if (freq_invariance_syscore_ops.node.prev)
+		return;
+
+	register_syscore_ops(&freq_invariance_syscore_ops);
+}
+#else
+static inline void register_freq_invariance_syscore_ops(void) {}
+#endif
+
 static void init_freq_invariance(bool secondary, bool cppc_ready)
 {
 	bool ret = false;
@@ -2109,6 +2127,7 @@ static void init_freq_invariance(bool secondary, bool cppc_ready)
 	if (ret) {
 		init_counter_refs();
 		static_branch_enable(&arch_scale_freq_key);
+		register_freq_invariance_syscore_ops();
 		pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio);
 	} else {
 		pr_debug("Couldn't determine max cpu frequency, necessary for scale-invariant accounting.\n");
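
As a usage note: syscore ->resume callbacks run late in the resume
path, on one CPU with interrupts disabled, which is why only the CPU
leading the resume transition refreshes its counters here and the
other CPUs are handled by the later "online" operations. A simplified
sketch of the core loop (modeled on syscore_resume() in
drivers/base/syscore.c, with tracing and debug checks trimmed; the
function name is hypothetical):

void syscore_resume_sketch(void)
{
	struct syscore_ops *ops;

	/* One CPU, IRQs off; call each registered ->resume in order. */
	list_for_each_entry(ops, &syscore_ops_list, node)
		if (ops->resume)
			ops->resume();
}

The node.prev test in the patch keeps registration idempotent:
freq_invariance_syscore_ops is static, so its embedded list_head stays
zero-initialized until register_syscore_ops() links it into the list,
and init_freq_invariance() may be called more than once.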