Message-ID: <165106776486.4207.10542512805446730601.tip-bot2@tip-bot2>
Date: Wed, 27 Apr 2022 13:56:04 -0000
From: "tip-bot2 for Thomas Gleixner" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Doug Smythies <dsmythies@...us.net>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: x86/cleanups] x86/aperfmperf: Integrate the fallback code from
show_cpuinfo()

The following commit has been merged into the x86/cleanups branch of tip:

Commit-ID: e696cabf5da2b4ed104508674de6125b860f3c9f
Gitweb: https://git.kernel.org/tip/e696cabf5da2b4ed104508674de6125b860f3c9f
Author: Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Mon, 25 Apr 2022 17:45:42 +02:00
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Wed, 27 Apr 2022 15:51:09 +02:00

x86/aperfmperf: Integrate the fallback code from show_cpuinfo()

Due to the avoidance of IPIs to idle CPUs, arch_freq_get_on_cpu() can return
0 when the last sample was too long ago.

show_cpuinfo() has a fallback to cpufreq_quick_get() and, if that fails too,
to cpu_khz, but the readout code for the per-CPU scaling frequency in sysfs
does not.

Move that fallback into arch_freq_get_on_cpu() so the behaviour is the same
when reading /proc/cpuinfo and /sys/..../scaling_cur_freq.
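
For illustration only, a minimal userspace sketch of that fallback order
follows; read_sample_khz(), quick_get_khz() and CPU_KHZ are hypothetical
stand-ins for the APERF/MPERF sample readout, cpufreq_quick_get() and
cpu_khz, not the kernel APIs themselves.

#include <stdio.h>

#define CPU_KHZ 2400000u	/* hypothetical stand-in for cpu_khz */

/* Pretend the APERF/MPERF sample is stale, as on an idle or NOHZ full CPU. */
static unsigned int read_sample_khz(void)
{
	return 0;
}

/* Hypothetical stand-in for cpufreq_quick_get(); 0 means "no value". */
static unsigned int quick_get_khz(void)
{
	return 0;
}

/* The fallback order that this change centralizes in arch_freq_get_on_cpu(). */
static unsigned int freq_get_khz(void)
{
	unsigned int freq = read_sample_khz();

	if (!freq)
		freq = quick_get_khz();
	return freq ? freq : CPU_KHZ;
}

int main(void)
{
	unsigned int freq = freq_get_khz();

	printf("cpu MHz: %u.%03u\n", freq / 1000, freq % 1000);
	return 0;
}
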
Suggested-by: "Rafael J. Wysocki" <rafael@...nel.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Tested-by: Doug Smythies <dsmythies@...us.net>
Link: https://lore.kernel.org/r/87pml5180p.ffs@tglx
---
arch/x86/kernel/cpu/aperfmperf.c | 10 +++++++---
arch/x86/kernel/cpu/proc.c | 7 +------
2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/aperfmperf.c b/arch/x86/kernel/cpu/aperfmperf.c
index b15c884..1f60a2b 100644
--- a/arch/x86/kernel/cpu/aperfmperf.c
+++ b/arch/x86/kernel/cpu/aperfmperf.c
@@ -405,12 +405,12 @@ void arch_scale_freq_tick(void)
 unsigned int arch_freq_get_on_cpu(int cpu)
 {
 	struct aperfmperf *s = per_cpu_ptr(&cpu_samples, cpu);
+	unsigned int seq, freq;
 	unsigned long last;
-	unsigned int seq;
 	u64 acnt, mcnt;
 
 	if (!cpu_feature_enabled(X86_FEATURE_APERFMPERF))
-		return 0;
+		goto fallback;
 
 	do {
 		seq = raw_read_seqcount_begin(&s->seq);
@@ -424,9 +424,13 @@ unsigned int arch_freq_get_on_cpu(int cpu)
 	 * which covers idle and NOHZ full CPUs.
 	 */
 	if (!mcnt || (jiffies - last) > MAX_SAMPLE_AGE)
-		return 0;
+		goto fallback;
 
 	return div64_u64((cpu_khz * acnt), mcnt);
+
+fallback:
+	freq = cpufreq_quick_get(cpu);
+	return freq ? freq : cpu_khz;
 }
 
 static int __init bp_init_aperfmperf(void)
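
Not fully visible in the hunk context above: the sample (acnt, mcnt,
last_update) is snapshotted under a seqcount retry loop, so a concurrent
update from arch_scale_freq_tick() is never read half-way. A minimal
single-threaded model of that reader pattern, not the kernel seqcount_t API:

#include <stdio.h>

struct sample {
	unsigned int seq;		/* even: stable, odd: writer in progress */
	unsigned long long acnt, mcnt;
};

static unsigned int read_begin(const struct sample *s)
{
	return s->seq;
}

static int read_retry(const struct sample *s, unsigned int seq)
{
	/* retry if a writer was active at read_begin() or has run since */
	return (seq & 1) || s->seq != seq;
}

int main(void)
{
	struct sample s = { .seq = 2, .acnt = 1200, .mcnt = 1000 };
	unsigned long long acnt, mcnt;
	unsigned int seq;

	do {
		seq = read_begin(&s);
		acnt = s.acnt;
		mcnt = s.mcnt;
	} while (read_retry(&s, seq));

	printf("aperf/mperf ratio x1000: %llu\n", acnt * 1000 / mcnt);
	return 0;
}
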
diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c
index 0a0ee55..099b6f0 100644
--- a/arch/x86/kernel/cpu/proc.c
+++ b/arch/x86/kernel/cpu/proc.c
@@ -86,12 +86,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	if (cpu_has(c, X86_FEATURE_TSC)) {
 		unsigned int freq = arch_freq_get_on_cpu(cpu);
 
-		if (!freq)
-			freq = cpufreq_quick_get(cpu);
-		if (!freq)
-			freq = cpu_khz;
-		seq_printf(m, "cpu MHz\t\t: %u.%03u\n",
-			   freq / 1000, (freq % 1000));
+		seq_printf(m, "cpu MHz\t\t: %u.%03u\n", freq / 1000, (freq % 1000));
 	}
 
 	/* Cache size */
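
As a quick way to observe the now-identical behaviour from userspace, the
sketch below prints the first "cpu MHz" line from /proc/cpuinfo next to CPU
0's scaling_cur_freq; it assumes CPU 0 exposes a cpufreq policy and that the
standard proc/sysfs paths are mounted.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/cpuinfo", "r");

	if (f) {
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "cpu MHz", 7)) {
				printf("/proc/cpuinfo:    %s", line);	/* first CPU only */
				break;
			}
		}
		fclose(f);
	}

	/* scaling_cur_freq reports kHz, /proc/cpuinfo reports MHz */
	f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
	if (f) {
		if (fgets(line, sizeof(line), f))
			printf("scaling_cur_freq: %s", line);
		fclose(f);
	}
	return 0;
}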