Message-ID: <54F7D974.8030208@linux.vnet.ibm.com>
Date: Thu, 05 Mar 2015 09:50:04 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Nicolas Pitre <nicolas.pitre@...aro.org>, tglx@...utronix.de,
linux-kernel@...r.kernel.org, mingo@...nel.org, rjw@...ysocki.net,
Michael Ellerman <mpe@...erman.id.au>
Subject: Re: [PATCH 32/35] clockevents: Fix cpu down race for hrtimer based
broadcasting
On 03/02/2015 08:26 PM, Peter Zijlstra wrote:
> On Fri, Feb 27, 2015 at 02:19:05PM +0530, Preeti U Murthy wrote:
>> The problem reported in the changelog of this patch is causing severe
>> regressions very frequently on our machines for certain usecases. It would
>> help to put in a fix in place first and then follow that up with these
>> cleanups. A fix on the below lines :
>
> Regression how? Neither Thomas' changelog nor yours mentions it's a
> regression.
>
> If it's a (recent) regression you need to have a Fixes tag at the very
> least. So when was this broken and by which patch?
>
It was found recently, when running a hotplug stress test on POWER, that
the machine hits lockups spewing messages such as:

NMI watchdog: BUG: soft lockup - CPU#20 stuck for 23s! [swapper/20:0]

or

INFO: rcu_sched detected stalls on CPUs/tasks: { 2 7 8 9 10 11 12 13 14 15
16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31} (detected by 6, t=2102
jiffies, g=1617, c=1616, q=1441)

along with many other lockup messages.
The issue was reported here:
http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
and was traced to commit 7cba160ad789a ("powernv/cpuidle: Redesign idle
states management"), which exposed the loophole in commit 5d1638acb9f6
("tick: Introduce hrtimer based broadcast") that is described in the
changelog of this patch.
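For anyone not following the broadcast code: conceptually, in the hrtimer
based broadcast mode one online CPU stands in for the missing broadcast
device by arming a hrtimer and waking up the CPUs whose local timers are
stopped in deep idle. If that CPU goes offline without the broadcast duty
being handed over, the deep-idle CPUs stop getting wakeups, and stalls
like the ones quoted above are what you observe. The snippet below is
only a userspace sketch of that idea (pthreads, every name invented for
illustration, not kernel code): an "owner" thread periodically broadcasts
wakeups to "idle" threads, and once the owner exits without a handover
the waiters time out instead of being woken.

/*
 * Userspace model of the cpu-down race with hrtimer based broadcast.
 * NOT kernel code; all names here are made up for illustration only.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NR_IDLE 3

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;

/* A CPU in a deep idle state, relying on the broadcast wakeup. */
static void *idle_cpu(void *arg)
{
	long id = (long)arg;
	struct timespec deadline;

	for (int tick = 0; tick < 4; tick++) {
		clock_gettime(CLOCK_REALTIME, &deadline);
		deadline.tv_sec += 1;		/* stand-in for the watchdog */

		pthread_mutex_lock(&lock);
		int ret = pthread_cond_timedwait(&wake, &lock, &deadline);
		pthread_mutex_unlock(&lock);

		if (ret == ETIMEDOUT) {
			printf("cpu %ld: no broadcast wakeup, stuck (lockup analogue)\n", id);
			return NULL;
		}
		printf("cpu %ld: woken for tick %d\n", id, tick);
	}
	return NULL;
}

/* The CPU whose hrtimer emulates the broadcast device. */
static void *broadcast_owner(void *arg)
{
	(void)arg;
	for (int i = 0; i < 2; i++) {
		usleep(200 * 1000);		/* the periodic broadcast hrtimer */
		pthread_mutex_lock(&lock);
		pthread_cond_broadcast(&wake);	/* wake all "idle CPUs" */
		pthread_mutex_unlock(&lock);
	}
	/* The "CPU goes offline" here without handing the broadcast duty
	 * over; nobody signals the waiters again and they stall. */
	return NULL;
}

int main(void)
{
	pthread_t idle[NR_IDLE], owner;

	for (long i = 0; i < NR_IDLE; i++)
		pthread_create(&idle[i], NULL, idle_cpu, (void *)i);
	pthread_create(&owner, NULL, broadcast_owner, NULL);

	pthread_join(owner, NULL);
	for (int i = 0; i < NR_IDLE; i++)
		pthread_join(idle[i], NULL);
	return 0;
}

Built with gcc -pthread, the "idle" threads report being woken as long as
the owner keeps broadcasting, and report the stall once it is gone; the
real fix needs the broadcast hrtimer to be moved off the dying CPU.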
Regards
Preeti U Murthy