Date:   Mon, 26 Jun 2023 12:36:01 -0700
From:   Keyon Jie <yang.jie@...ux.intel.com>
To:     Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org,
        linux-kernel@...r.kernel.org
Cc:     Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        "H . Peter Anvin" <hpa@...or.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Yair Podemsky <ypodemsk@...hat.com>,
        Keyon Jie <yang.jie@...ux.intel.com>
Subject: [PATCH] x86/aperfmperf: Fix the fallback condition in arch_freq_get_on_cpu()

From commit f3eca381bd49 on, the fallback condition for "the last update
was too long ago" has been comparing ticks and milliseconds by mistake,
which leads to the condition being met and the fallback method being used
frequently.

The change to compare ticks here corrects that and fixes related issues
that have been seen on x86 platforms since the 5.18 kernel.

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217597
Fixes: f3eca381bd49 ("x86/aperfmperf: Replace arch_freq_get_on_cpu()")
Cc: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Keyon Jie <yang.jie@...ux.intel.com>
---
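Note for reviewers (illustration only, not kernel code): below is a minimal
standalone sketch of the unit mismatch described above, using made-up HZ and
threshold values rather than the actual constants in aperfmperf.c. It only
shows that comparing a tick-based delta against a limit expressed in the
wrong time unit shifts the effective staleness window by a factor of
1000/HZ; whether that makes the condition fire too often or too rarely
depends on HZ and on which side carries the wrong unit.

    #include <stdio.h>

    #define HZ           250   /* hypothetical tick rate (ticks per second) */
    #define INTENDED_MS   20   /* hypothetical intended maximum sample age  */

    int main(void)
    {
            /*
             * Correct: convert the millisecond limit into ticks before
             * comparing against a delta that is itself measured in ticks
             * (jiffies).
             */
            unsigned long limit_ticks = INTENDED_MS * HZ / 1000;  /* 5 ticks    */

            /* Mixed units: a millisecond value used directly as a tick count. */
            unsigned long limit_mixed = INTENDED_MS;              /* 20 "ticks" */

            /*
             * A delta of N ticks corresponds to N * 1000 / HZ milliseconds,
             * so the effective staleness window differs by 1000/HZ.
             */
            printf("tick-based limit fires after %lu ms\n", limit_ticks * 1000 / HZ);
            printf("mixed-unit limit fires after %lu ms\n", limit_mixed * 1000 / HZ);

            return 0;
    }

With the hypothetical HZ=250 this prints 20 ms for the tick-based limit and
80 ms for the mixed-unit one, i.e. the intended 20 ms window is off by 4x.
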
 arch/x86/kernel/cpu/aperfmperf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/aperfmperf.c b/arch/x86/kernel/cpu/aperfmperf.c
index fdbb5f07448f..24e24e137226 100644
--- a/arch/x86/kernel/cpu/aperfmperf.c
+++ b/arch/x86/kernel/cpu/aperfmperf.c
@@ -432,7 +432,7 @@ unsigned int arch_freq_get_on_cpu(int cpu)
 	 * Bail on invalid count and when the last update was too long ago,
 	 * which covers idle and NOHZ full CPUs.
 	 */
-	if (!mcnt || (jiffies - last) > MAX_SAMPLE_AGE)
+	if (!mcnt || (jiffies - last) > MAX_SAMPLE_AGE * cpu_khz)
 		goto fallback;
 
 	return div64_u64((cpu_khz * acnt), mcnt);
-- 
2.34.1
