Message-ID: <20251204180914.1855553-1-ak@linux.intel.com>
Date: Thu,  4 Dec 2025 10:09:14 -0800
From: Andi Kleen <ak@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: x86@...nel.org,
	Andi Kleen <ak@...ux.intel.com>,
	ggherdovich@...e.cz,
	Peter Zijlstra <peterz@...radead.org>,
	rafael.j.wysocki@...el.com
Subject: [PATCH] x86/aperfmperf: Don't disable scheduler APERF/MPERF on bad samples

The APERF and MPERF MSRs are read together, and the ratio between
the two deltas is used to scale the scheduler capacity with frequency.
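
For reference, the per-tick computation in scale_freq_tick() (see the
diff below) boils down to the following simplified sketch, with the
error checking omitted; variable names follow the kernel code:

	/*
	 * Simplified per-tick frequency-invariance scaling, following
	 * scale_freq_tick() in arch/x86/kernel/cpu/aperfmperf.c.
	 * acnt/mcnt are the APERF/MPERF deltas since the last tick,
	 * freq_ratio the (scaled) max-to-base frequency ratio.
	 */
	acnt <<= 2 * SCHED_CAPACITY_SHIFT;
	mcnt *= freq_ratio;

	freq_scale = div64_u64(acnt, mcnt);
	if (freq_scale > SCHED_CAPACITY_SCALE)
		freq_scale = SCHED_CAPACITY_SCALE;

	this_cpu_write(arch_freq_scale, freq_scale);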

Since commit e2b0d619b400, whenever there is an over/underflow in
the APERF/MPERF computation, the sampling gets completely disabled,
under the assumption that there is a problem with the hardware.

However, this can happen without any malfunction when there is a
long enough interruption between the two MSR reads, for example due
to an unlucky NMI, SMI, or other system event causing delays. We saw
it when such a delay resulted in acnt_delta << mcnt_delta (about 4k
for acnt_delta versus 2M for mcnt_delta).

In this case the ratio computation underflows. The underflow is
detected, but APERF/MPERF usage then gets incorrectly disabled forever.
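
To illustrate with the numbers above (freq_ratio = 2200 is an assumed
value for illustration only; the real ratio depends on the platform's
turbo/base frequencies):

	acnt_delta = 4096        (~4k, the bad sample)
	mcnt_delta = 2000000     (~2M)
	freq_ratio = 2200        (assumed for illustration)

	acnt = 4096 << (2 * SCHED_CAPACITY_SHIFT) = 4096 << 20 = 4294967296
	mcnt = 2000000 * 2200                     = 4400000000

	freq_scale = div64_u64(acnt, mcnt) = 0   /* !freq_scale -> error */

With freq_scale == 0 the old code took the error path and disabled
frequency invariance permanently.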

Remove the code that completely disables APERF/MPERF on a bad
sample. Instead, when any over/underflow happens, return the
fallback full capacity for that sample.

In theory there could be a threshold after which sampling is
disabled, but since delays can happen at random it is unclear what
a good threshold would be. If the hardware is truly broken this
will cost a few more cycles to read the bogus samples, but they
will all still be rejected.

Cc: ggherdovich@...e.cz
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: rafael.j.wysocki@...el.com
Fixes: e2b0d619b400 ("x86, sched: check for counters overflow ...")
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
 arch/x86/kernel/cpu/aperfmperf.c | 36 ++++++++++----------------------
 1 file changed, 11 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kernel/cpu/aperfmperf.c b/arch/x86/kernel/cpu/aperfmperf.c
index a315b0627dfb..7f4210e1082b 100644
--- a/arch/x86/kernel/cpu/aperfmperf.c
+++ b/arch/x86/kernel/cpu/aperfmperf.c
@@ -330,23 +330,6 @@ static void __init bp_init_freq_invariance(void)
 	}
 }
 
-static void disable_freq_invariance_workfn(struct work_struct *work)
-{
-	int cpu;
-
-	static_branch_disable(&arch_scale_freq_key);
-
-	/*
-	 * Set arch_freq_scale to a default value on all cpus
-	 * This negates the effect of scaling
-	 */
-	for_each_possible_cpu(cpu)
-		per_cpu(arch_freq_scale, cpu) = SCHED_CAPACITY_SCALE;
-}
-
-static DECLARE_WORK(disable_freq_invariance_work,
-		    disable_freq_invariance_workfn);
-
 DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE;
 EXPORT_PER_CPU_SYMBOL_GPL(arch_freq_scale);
 
@@ -437,30 +420,33 @@ static void scale_freq_tick(u64 acnt, u64 mcnt)
 	if (!arch_scale_freq_invariant())
 		return;
 
+	/*
+	 * On any over/underflow just ignore the sample. It could
+	 * be due to an unlucky NMI or similar between the
+	 * APERF and MPERF reads.
+	 */
 	if (check_shl_overflow(acnt, 2*SCHED_CAPACITY_SHIFT, &acnt))
-		goto error;
+		goto out;
 
 	if (static_branch_unlikely(&arch_hybrid_cap_scale_key))
 		freq_ratio = READ_ONCE(this_cpu_ptr(arch_cpu_scale)->freq_ratio);
 	else
 		freq_ratio = arch_max_freq_ratio;
 
+	freq_scale = SCHED_CAPACITY_SCALE;
+
 	if (check_mul_overflow(mcnt, freq_ratio, &mcnt) || !mcnt)
-		goto error;
+		goto out;
 
 	freq_scale = div64_u64(acnt, mcnt);
 	if (!freq_scale)
-		goto error;
+		goto out;
 
 	if (freq_scale > SCHED_CAPACITY_SCALE)
 		freq_scale = SCHED_CAPACITY_SCALE;
 
+out:
 	this_cpu_write(arch_freq_scale, freq_scale);
-	return;
-
-error:
-	pr_warn("Scheduler frequency invariance went wobbly, disabling!\n");
-	schedule_work(&disable_freq_invariance_work);
 }
 #else
 static inline void bp_init_freq_invariance(void) { }
-- 
2.51.1

