Message-ID: <20240729105504.2170-1-Jonathan.Cameron@huawei.com>
Date: Mon, 29 Jul 2024 11:55:04 +0100
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>,
<rafael.j.wysocki@...el.com>, <catalin.marinas@....com>, Ingo Molnar
<mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave Hansen
<dave.hansen@...ux.intel.com>, <x86@...nel.org>, "H . Peter Anvin"
<hpa@...or.com>, <Terry.bowman@....com>
CC: <linuxarm@...wei.com>, <guohanjun@...wei.com>, <gshan@...hat.com>,
<miguel.luis@...cle.com>, Linux List Kernel Mailing
<linux-kernel@...r.kernel.org>, Linux regressions mailing list
<regressions@...ts.linux.dev>, <shameerali.kolothum.thodi@...wei.com>
Subject: [PATCH] x86/aperfmperf: Fix deadlock on cpu_hotplug_lock
The broken patch results in a call to init_freq_invariance_cppc() in a CPU
hotplug handler in both the path for initially present CPUs and those
hotplugged later. That function includes a one-time call to
amd_set_max_freq_ratio(), which in turn calls freq_invariance_enable(),
which contains a static_branch_enable() that takes the cpu_hotplug_lock,
which is already held, resulting in a deadlock.
Avoid the deadlock by using static_branch_enable_cpuslocked(), as the lock
will always be held at that point. The equivalent path on Intel does not
already hold this lock, so take it around the call to
freq_invariance_enable(). That results in the lock being held over the call
to register_freq_invariance_syscore_ops(), which looks to be safe.
Fixes: c1385c1f0ba3 ("ACPI: processor: Simplify initial onlining to use same path for cold and hotplug")
Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>
Closes: https://lore.kernel.org/all/CABXGCsPvqBfL5hQDOARwfqasLRJ_eNPBbCngZ257HOe=xbWDkA@mail.gmail.com/
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@...wei.com>
---
arch/x86/kernel/cpu/aperfmperf.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/aperfmperf.c b/arch/x86/kernel/cpu/aperfmperf.c
index b3fa61d45352..0b69bfbf345d 100644
--- a/arch/x86/kernel/cpu/aperfmperf.c
+++ b/arch/x86/kernel/cpu/aperfmperf.c
@@ -306,7 +306,7 @@ static void freq_invariance_enable(void)
WARN_ON_ONCE(1);
return;
}
- static_branch_enable(&arch_scale_freq_key);
+ static_branch_enable_cpuslocked(&arch_scale_freq_key);
register_freq_invariance_syscore_ops();
pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio);
}
@@ -323,8 +323,10 @@ static void __init bp_init_freq_invariance(void)
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
return;
- if (intel_set_max_freq_ratio())
+ if (intel_set_max_freq_ratio()) {
+ guard(cpus_read_lock)();
freq_invariance_enable();
+ }
}
static void disable_freq_invariance_workfn(struct work_struct *work)
--
2.43.0