Message-ID: <20201221162249.3119-1-lecopzer.chen@mediatek.com>
Date: Tue, 22 Dec 2020 00:22:49 +0800
From: Lecopzer Chen <lecopzer.chen@...iatek.com>
To: <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>
CC: <lecopzer.chen@...iatek.com>, <alexandru.elisei@....com>,
<linux-mediatek@...ts.infradead.org>, <matthias.bgg@...il.com>,
<oliver.sang@...el.com>, <sumit.garg@...aro.org>,
<will@...nel.org>, <yj.chiang@...iatek.com>,
<peterz@...radead.org>, <mingo@...hat.com>, <acme@...nel.org>,
<mark.rutland@....com>, <alexander.shishkin@...ux.intel.com>,
<jolsa@...hat.com>, <namhyung@...nel.org>,
<catalin.marinas@....com>
Subject: [PATCH v2] arm64: perf: Fix access percpu variables in preemptible context

commit 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
re-initializes the lockup detector after the arm64 PMU driver is initialized,
which opens a window for calling smp_processor_id() in preemptible context.
hardlockup_detector_perf_init() is always called in the init stage on a
single CPU, but here the lockup detector is re-initialized after the init
task has become migratable.
Fix this by relying on lockup detector reconfiguration, which runs
softlockup_start_all() on each CPU and then calls watchdog_nmi_enable().
Because softlockup_start_all() uses an IPI function call,
watchdog_nmi_enable() is guaranteed to run bound to each CPU, which fixes
the issue.

BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
caller is debug_smp_processor_id+0x20/0x2c
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.10.0+ #276
Hardware name: linux,dummy-virt (DT)
Call trace:
dump_backtrace+0x0/0x3c0
show_stack+0x20/0x6c
dump_stack+0x2f0/0x42c
check_preemption_disabled+0x1cc/0x1dc
debug_smp_processor_id+0x20/0x2c
hardlockup_detector_event_create+0x34/0x18c
hardlockup_detector_perf_init+0x2c/0x134
watchdog_nmi_probe+0x18/0x24
lockup_detector_init+0x44/0xa8
armv8_pmu_driver_init+0x54/0x78
do_one_initcall+0x184/0x43c
kernel_init_freeable+0x368/0x380
kernel_init+0x1c/0x1cc
ret_from_fork+0x10/0x30
Fixes: 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
Signed-off-by: Lecopzer Chen <lecopzer.chen@...iatek.com>
Reported-by: kernel test robot <oliver.sang@...el.com>
Cc: Sumit Garg <sumit.garg@...aro.org>
---
Changelog v1 -> v2:
* https://lore.kernel.org/lkml/20201217130617.32202-1-lecopzer.chen@mediatek.com/
* Move solution from kernel/watchdog_hld.c to arm64 perf_event
* avoid preemptive kmalloc in preempt_disable().
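For reference, the class of bug being avoided here is accessing percpu state
from preemptible context. The following is an illustrative sketch only (not
part of this patch) of the standard kernel guard, using the real
get_cpu()/put_cpu() helpers from <linux/smp.h>:

```c
#include <linux/smp.h>

/* Illustrative only -- generic kernel pattern, not code from this patch. */
static void touch_percpu_safely(void)
{
	int cpu;

	/*
	 * smp_processor_id() is only valid while preemption is disabled;
	 * otherwise the task may migrate and the result goes stale, which
	 * is exactly what CONFIG_DEBUG_PREEMPT reports in the splat above.
	 */
	cpu = get_cpu();		/* disables preemption */
	/* ... access per-CPU data for 'cpu' ... */
	put_cpu();			/* re-enables preemption */
}
```

The patch below sidesteps the problem differently: instead of disabling
preemption around the probe, it lets the lockup detector reconfiguration
path create the perf events from a context already bound to each CPU.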
arch/arm64/kernel/perf_event.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 38bb07eff872..c03e21210bbb 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1345,4 +1345,20 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
return (u64)max_cpu_freq * watchdog_thresh;
}
+
+/*
+ * hardlockup_detector_perf_init() is always called in the init stage with a
+ * single cpu. On arm64, we re-initialize the lockup detector after the PMU
+ * driver has been initialized. The lockup detector init path uses lots of
+ * percpu variables, and this makes CONFIG_DEBUG_PREEMPT unhappy because we
+ * are now in preemptible context.
+ * Return 0 if the NMI is ready, and register the NMI hardlockup detector
+ * via lockup detector reconfiguration.
+ */
+int __init watchdog_nmi_probe(void)
+{
+ if (arm_pmu_irq_is_nmi())
+ return 0;
+ return -ENODEV;
+}
#endif
--
2.25.1