Message-ID: <20160115211125.GA3818@linux.vnet.ibm.com>
Date: Fri, 15 Jan 2016 13:11:25 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Sasha Levin <sasha.levin@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: timers: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
On Fri, Jan 15, 2016 at 11:03:24AM +0100, Thomas Gleixner wrote:
> On Thu, 14 Jan 2016, Paul E. McKenney wrote:
> > > Untested patch below.
> >
> > One small fix to make it build below. Started rcutorture, somewhat
> > pointlessly given that the splat doesn't appear on my setup.
>
> Well, at least it tells us whether the change explodes by itself.
Hmmm...
So this is a strange one. I have been seeing increasing instability
in mainline over the past couple of releases, with the main symptom
being that the kernel decides that awakening RCU's grace-period kthreads
is an optional activity. The usual situation is that the kthread is
blocked for tens of seconds in a wait_event_interruptible_timeout(),
despite having a three-jiffy timeout. Doing periodic wakeups from
the scheduling-clock interrupt seems to clear things up, but such hacks
should not be necessary.
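For concreteness, here is a minimal sketch of the wait pattern in
question. The wait-queue and flag names below are hypothetical
stand-ins, not RCU's actual grace-period state:

#include <linux/kthread.h>
#include <linux/wait.h>

/* Hypothetical stand-ins for the grace-period kthread's state. */
static DECLARE_WAIT_QUEUE_HEAD(gp_wq);
static unsigned long gp_flags;

static int gp_kthread_fn(void *unused)
{
	long ret;

	while (!kthread_should_stop()) {
		/*
		 * Sleep until awakened or until the three-jiffy
		 * timeout expires (a zero return value means
		 * timeout).  Even a lost wakeup should therefore
		 * cost only a few jiffies, not the tens of seconds
		 * actually observed.
		 */
		ret = wait_event_interruptible_timeout(gp_wq,
						       READ_ONCE(gp_flags),
						       3);
		if (ret > 0)
			WRITE_ONCE(gp_flags, 0); /* consume the event */
	}
	return 0;
}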
Normally, I have to run for some hours to have a good chance of seeing
this happen. With this change, the failure triggered in a 30-minute
run. Not only that,
but in a .config scenario that is normally very hard to trigger. This
scenario does involve CPU hotplug, and I am re-running with CPU hotplug
disabled.
That said, I am starting to hear reports of people hitting this without
CPU hotplug operations...
Thanx, Paul