Message-ID: <alpine.DEB.2.20.1710312145190.1942@nanos>
Date: Tue, 31 Oct 2017 22:32:00 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
cc: Guenter Roeck <linux@...ck-us.net>, linux-kernel@...r.kernel.org,
Don Zickus <dzickus@...hat.com>, Ingo Molnar <mingo@...nel.org>
Subject: Re: Crashes in perf_event_ctx_lock_nested
On Tue, 31 Oct 2017, Peter Zijlstra wrote:
> On Mon, Oct 30, 2017 at 03:45:12PM -0700, Guenter Roeck wrote:
> > I added some logging and a long msleep() in hardlockup_detector_perf_cleanup().
> > Here is the result:
> >
> > [ 0.274361] NMI watchdog: ############ hardlockup_detector_perf_init
> > [ 0.274915] NMI watchdog: ############ hardlockup_detector_event_create(0)
> > [ 0.277049] NMI watchdog: ############ hardlockup_detector_perf_cleanup
> > [ 0.277593] NMI watchdog: ############ hardlockup_detector_perf_enable(0)
> > [ 0.278027] NMI watchdog: ############ hardlockup_detector_event_create(0)
> > [ 1.312044] NMI watchdog: ############ hardlockup_detector_perf_cleanup done
> > [ 1.385122] NMI watchdog: ############ hardlockup_detector_perf_enable(1)
> > [ 1.386028] NMI watchdog: ############ hardlockup_detector_event_create(1)
> > [ 1.466102] NMI watchdog: ############ hardlockup_detector_perf_enable(2)
> > [ 1.475536] NMI watchdog: ############ hardlockup_detector_event_create(2)
> > [ 1.535099] NMI watchdog: ############ hardlockup_detector_perf_enable(3)
> > [ 1.535101] NMI watchdog: ############ hardlockup_detector_event_create(3)
>
> > [ 7.222816] NMI watchdog: ############ hardlockup_detector_perf_disable(0)
> > [ 7.230567] NMI watchdog: ############ hardlockup_detector_perf_disable(1)
> > [ 7.243138] NMI watchdog: ############ hardlockup_detector_perf_disable(2)
> > [ 7.250966] NMI watchdog: ############ hardlockup_detector_perf_disable(3)
> > [ 7.258826] NMI watchdog: ############ hardlockup_detector_perf_enable(1)
> > [ 7.258827] NMI watchdog: ############ hardlockup_detector_perf_cleanup
> > [ 7.258831] NMI watchdog: ############ hardlockup_detector_perf_enable(2)
> > [ 7.258833] NMI watchdog: ############ hardlockup_detector_perf_enable(0)
> > [ 7.258834] NMI watchdog: ############ hardlockup_detector_event_create(2)
> > [ 7.258835] NMI watchdog: ############ hardlockup_detector_event_create(0)
> > [ 7.260169] NMI watchdog: ############ hardlockup_detector_perf_enable(3)
> > [ 7.260170] NMI watchdog: ############ hardlockup_detector_event_create(3)
> > [ 7.494251] NMI watchdog: ############ hardlockup_detector_event_create(1)
> > [ 8.287135] NMI watchdog: ############ hardlockup_detector_perf_cleanup done
> >
> > Looks like there are a number of problems: hardlockup_detector_event_create()
> > creates the event data structure even if it is already created,
>
> Right, that does look dodgy. And on its own should be fairly
> straightforward to cure. But I'd like to understand the rest of it first.
>
> > and hardlockup_detector_perf_cleanup() runs unprotected and in
> > parallel to the enable/create functions.
We have serialization via watchdog_mutex, which is held around all
invocations of lockup_detector_reconfigure().
So we have the following scenarios:
boot and the proc interface:

  lock(watchdog_mutex);
  lockup_detector_reconfigure();
    cpus_read_lock();
    stop();
      park_threads();
    update();
    start();
      unpark_threads();
    cpus_read_unlock();
    cleanup();
  unlock(watchdog_mutex);
cpu hotplug:

  cpu_maps_update_begin();
    cpus_write_lock();
      cpu_up();
        thread_unpark();
          start();
    cpus_write_unlock();
  cpu_maps_update_end();
cpu hotunplug:

  cpu_maps_update_begin();
    cpus_write_lock();
      cpu_down();
        thread_park();
          stop();
    cpus_write_unlock();
    lock(watchdog_mutex);
    cleanup();
    unlock(watchdog_mutex);
  cpu_maps_update_end();
But there is a problem:
The unpark of the smpboot thread is not synchronous; the thread is merely
unparked, so there is no guarantee when it actually runs.

If it starts running _before_ the cleanup happens, it creates a new event
and overwrites the pointer to the dead event. The new event is then
released by the cleanup because the CPU is still marked in the dead events
mask. The park side is safe, as it actually waits for the thread to reach
parked state.
That means we can have the following situation:
  lock(watchdog_mutex);
  lockup_detector_reconfigure();
    cpus_read_lock();
    stop();
      park();
    update();
    start();
      unpark();
    cpus_read_unlock();          thread runs()
    cleanup();
  unlock(watchdog_mutex);
Duh. /me feels outright stupid.
So we have to revert
a33d44843d45 ("watchdog/hardlockup/perf: Simplify deferred event destroy")
Patch attached.
Thanks,
tglx
8<------------------
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -21,6 +21,7 @@
 static DEFINE_PER_CPU(bool, hard_watchdog_warn);
 static DEFINE_PER_CPU(bool, watchdog_nmi_touch);
 static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
+static DEFINE_PER_CPU(struct perf_event *, dead_event);
 static struct cpumask dead_events_mask;
 
 static unsigned long hardlockup_allcpu_dumped;
@@ -203,6 +204,8 @@ void hardlockup_detector_perf_disable(void)
 
 	if (event) {
 		perf_event_disable(event);
+		this_cpu_write(watchdog_ev, NULL);
+		this_cpu_write(dead_event, event);
 		cpumask_set_cpu(smp_processor_id(), &dead_events_mask);
 		watchdog_cpus--;
 	}
@@ -218,7 +221,7 @@ void hardlockup_detector_perf_cleanup(void)
 	int cpu;
 
 	for_each_cpu(cpu, &dead_events_mask) {
-		struct perf_event *event = per_cpu(watchdog_ev, cpu);
+		struct perf_event *event = per_cpu(dead_event, cpu);
 
 		/*
 		 * Required because for_each_cpu() reports unconditionally
@@ -226,7 +229,7 @@ void hardlockup_detector_perf_cleanup(void)
 		 */
 		if (event)
 			perf_event_release_kernel(event);
-		per_cpu(dead_event, cpu) = NULL;
+		per_cpu(dead_event, cpu) = NULL;
 	}
 	cpumask_clear(&dead_events_mask);
 }