Message-ID: <1569453525-41874-1-git-send-email-decui@microsoft.com>
Date: Wed, 25 Sep 2019 23:18:59 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"sashal@...nel.org" <sashal@...nel.org>,
"daniel.lezcano@...aro.org" <daniel.lezcano@...aro.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Michael Kelley <mikelley@...rosoft.com>
CC: Dexuan Cui <decui@...rosoft.com>
Subject: [PATCH][RESEND] clocksource/drivers: hyperv_timer: Fix CPU offlining
by unbinding the timer
The commit fd1fea6834d0 claims "No behavior is changed", but it actually
removes the clockevents_unbind_device() call from hv_synic_cleanup().
In the discussion earlier this month, I thought the unbind call was
unnecessary (see https://www.spinics.net/lists/arm-kernel/msg739888.html).
However, after more investigation it turns out that, when a VM runs on
Hyper-V, the unbind call must be kept: otherwise CPU offlining may fail,
because a per-cpu clockevent device is still needed after hv_synic_cleanup()
has disabled the per-cpu Hyper-V timer device.
The issue was found during hibernation testing. These are the details:
1. CPU0 hangs in wait_for_ap_thread(), when trying to offline CPU1:
  hibernation_snapshot
    create_image
      suspend_disable_secondary_cpus
        freeze_secondary_cpus
          _cpu_down(1, 1, CPUHP_OFFLINE)
            cpuhp_kick_ap_work
              cpuhp_kick_ap
                __cpuhp_kick_ap
                  wait_for_ap_thread()
2. CPU0 hangs because CPU1 itself hangs: after CPU1 disables the per-cpu
Hyper-V timer device in hv_synic_cleanup(), CPU1 still arms a timer, which
can then never fire. Here is how this happens.
2.1 In _cpu_down(1, 1, CPUHP_OFFLINE), CPU0 first tries to move CPU1 to
the CPUHP_TEARDOWN_CPU state, which wakes up the cpuhp/1 thread on CPU1;
that thread is basically a loop that executes the callbacks defined in the
global array cpuhp_hp_states[]: see smpboot_thread_fn().
2.2 This is how a callback is called on CPU1:
  smpboot_thread_fn
    ht->thread_fn(td->cpu), i.e. cpuhp_thread_fun
      cpuhp_invoke_callback
        state = st->state
        st->state--
        cpuhp_get_step(state)->teardown.single()
2.3 At first, the state of CPU1 is CPUHP_ONLINE, whose .teardown.single is
NULL, so execution returns to the loop in smpboot_thread_fn(), which then
reruns cpuhp_invoke_callback() with a smaller st->state.
2.4 In this way, the .teardown.single callback of every state between
CPUHP_ONLINE and CPUHP_TEARDOWN_CPU runs, one state at a time (see the
simplified sketch below).
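Ignoring the hand-offs between smpboot_thread_fn() and cpuhp_thread_fun(),
the net effect of 2.2-2.4 is roughly the following simplified sketch (for
illustration only, not the actual kernel code; error handling omitted):

    /*
     * Simplified: walk the hotplug states downwards from CPUHP_ONLINE and
     * run each state's teardown callback, skipping states that have none.
     */
    while (st->state > CPUHP_TEARDOWN_CPU) {
            enum cpuhp_state state = st->state;

            st->state--;
            if (cpuhp_get_step(state)->teardown.single)
                    cpuhp_get_step(state)->teardown.single(cpu);
    }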
2.5 When the walk reaches the CPUHP_AP_ONLINE_DYN range, hv_synic_cleanup()
runs: see vmbus_bus_init(). It calls hv_stimer_cleanup() ->
hv_ce_shutdown() to disable the per-cpu timer device (shown below for
reference), so timer interrupts no longer happen on CPU1.
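For reference, hv_ce_shutdown() in drivers/clocksource/hyperv_timer.c looks
roughly like this (direct-mode IRQ handling omitted): it clears the per-cpu
STIMER0 count and config MSRs, so the synthetic timer stops generating
interrupts on this CPU:

    static int hv_ce_shutdown(struct clock_event_device *evt)
    {
            /* Clear the STIMER0 count and config MSRs for this CPU */
            hv_init_timer(0, 0);
            hv_init_timer_config(0, 0);

            return 0;
    }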
2.6 Later, the .teardown.single of CPUHP_AP_SMPBOOT_THREADS, i.e.
smpboot_park_threads(), starts to run and tries to park the other per-cpu
hotplug threads, e.g. ksoftirqd/1 and rcuc/1. Here a timer is armed via the
call chain below, but it can never fire since CPU1 no longer has an active
clockevent device, so CPU1 hangs and cannot be offlined (see the
wait_task_inactive() fragment after the chain):
  smpboot_park_threads
    smpboot_park_thread
      kthread_park
        wait_task_inactive
          schedule_hrtimeout(&to, HRTIMER_MODE_REL)
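The hrtimer is armed by this fragment of wait_task_inactive() in
kernel/sched/core.c (shown roughly): the task being parked is still on the
runqueue, so the caller waits about one tick and retries. With the per-cpu
clockevent device already shut down, the hrtimer set by schedule_hrtimeout()
never expires, so the cpuhp/1 thread sleeps forever:

    if (unlikely(queued)) {
            ktime_t to = NSEC_PER_SEC / HZ;

            /*
             * Sleep for roughly one tick and then check again; this
             * relies on a timer interrupt firing on this CPU.
             */
            set_current_state(TASK_UNINTERRUPTIBLE);
            schedule_hrtimeout(&to, HRTIMER_MODE_REL);
            continue;
    }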
With this patch, unbinding the device makes the clockevents core install a
replacement, so when the per-cpu Hyper-V timer device is disabled the CPU
switches to the Local APIC timer and the hang can no longer happen.
Fixes: fd1fea6834d0 ("clocksource/drivers: Make Hyper-V clocksource ISA agnostic")
Signed-off-by: Dexuan Cui <decui@...rosoft.com>
---
The patch was first posted on Jul 27: https://lkml.org/lkml/2019/7/27/5
There is no change since then.
drivers/clocksource/hyperv_timer.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
index ba2c79e6a0ee..17b96f9ed0c9 100644
--- a/drivers/clocksource/hyperv_timer.c
+++ b/drivers/clocksource/hyperv_timer.c
@@ -139,6 +139,7 @@ void hv_stimer_cleanup(unsigned int cpu)
/* Turn off clockevent device */
if (ms_hyperv.features & HV_MSR_SYNTIMER_AVAILABLE) {
ce = per_cpu_ptr(hv_clock_event, cpu);
+ clockevents_unbind_device(ce, cpu);
hv_ce_shutdown(ce);
}
}
--
2.19.1