Message-Id: <20180128005317.GA31914@linux.vnet.ibm.com>
Date: Sat, 27 Jan 2018 16:53:17 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Sebastian Sewior <bigeasy@...utronix.de>,
Anna-Maria Gleixner <anna-maria@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH] hrtimer: Reset hrtimer cpu base proper on CPU hotplug
On Fri, Jan 26, 2018 at 02:09:17PM -0800, Paul E. McKenney wrote:
> On Fri, Jan 26, 2018 at 02:54:32PM +0100, Thomas Gleixner wrote:
> > The hrtimer interrupt code contains a hang detection and mitigation
> > mechanism, which prevents a long-delayed hrtimer interrupt from causing
> > continuous retriggering of interrupts that would keep the system from making
> > progress. If a hang is detected then the timer hardware is programmed with
> > a certain delay into the future and a flag is set in the hrtimer cpu base
> > which prevents newly enqueued timers from reprogramming the timer hardware
> > prior to the chosen delay. The subsequent hrtimer interrupt after the delay
> > clears the flag and resumes normal operation.
> >
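For anyone following along, the mechanism in question lives in
kernel/time/hrtimer.c. Paraphrasing from memory (so the details below
might not exactly match the tree), the hang path in hrtimer_interrupt()
looks roughly like this:

	/* Too many retries: declare a hang and back off. */
	cpu_base->nr_hangs++;
	cpu_base->hang_detected = 1;

	/* Push the next event into the future, but no more than 100ms out. */
	if (delta > 100 * NSEC_PER_MSEC)
		expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
	else
		expires_next = ktime_add(now, delta);
	tick_program_event(expires_next, 1);

and the reprogramming path for newly enqueued timers checks the flag:

	/* Leave the hardware alone until the delayed interrupt clears this. */
	if (cpu_base->hang_detected)
		return;
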
> > If such a hang happens in the last hrtimer interrupt before a CPU is
> > unplugged then the hang_detected flag is set and stays that way when the
> > CPU is plugged in again. At that point the timer hardware is not armed and
> > it cannot be armed because the hang_detected flag is still active, so
> > nothing clears that flag. As a consequence the CPU does not receive hrtimer
> > interrupts and no timers expire on that CPU, which results in RCU stalls and
> > other malfunctions.
> >
> > Clear the flag along with some other less critical members of the hrtimer
> > cpu base to ensure starting from a clean state when a CPU is plugged in.
> >
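For anyone who hits something similar: as I understand it, the shape of
such a reset is in the CPU-hotplug prepare callback. This is only a
sketch, and the exact field list is Thomas's call and may differ in the
final patch:

	int hrtimers_prepare_cpu(unsigned int cpu)
	{
		struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);

		/* ... existing per-clock-base initialization ... */

		cpu_base->cpu = cpu;
		cpu_base->active_bases = 0;
		cpu_base->hres_active = 0;
		cpu_base->hang_detected = 0;	/* the one that bites here */
		cpu_base->next_timer = NULL;
		cpu_base->expires_next = KTIME_MAX;
		hrtimer_init_hres(cpu_base);
		return 0;
	}

The key point being that hang_detected goes back to zero before the
incoming CPU can enqueue its first timer.
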
> > Thanks to Paul, Sebastian and Anna-Maria for their help in getting down to
> > the root cause of that hard-to-reproduce heisenbug. Once understood, it's
> > trivial and certainly justifies a brown paper bag.
>
> Thank you very much, and I do know that feeling! After reading the
> commit log, I feel significantly less incompetent for having failed to
> find this one. ;-) But it did pass rcutorture testing for a great many
> years, didn't it? :-/
>
> I have started an eight-hour seven-way test on the dreaded rcutorture
> TREE01 scenario. In the meantime, off to the train!
And bozo here forgot to disable tracing, so the runs take much longer
than the stated time. And because I applied against v4.15-rc9, which
lacks the recent code to suppress stall warnings while dumping the trace log,
all runs get RCU CPU stall warnings. :-/
But I can filter out those tracing-induced stall warnings easily enough,
and thus far there have been 84 successful 30-minute runs out of 112
total, for no failures in 42 hours of TREE01 execution. Given the base
failure rate of 0.33 per hour, the probability of this happening by
chance is something like ten to the minus sixth power, so:
Tested-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
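(For the record, the arithmetic behind that estimate, assuming the
failures follow a Poisson process at the base rate:

	P(0 failures in 42 hours) = exp(-0.33 * 42) = exp(-13.86) ~= 1e-6

so roughly one chance in a million that a still-broken kernel would
survive this long.)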
I will be doing longer runs without tracing, but things are looking
extremely good thus far! Thank you all very much!!!
Thanx, Paul