Message-ID: <20141022210943.GL4977@linux.vnet.ibm.com>
Date: Wed, 22 Oct 2014 14:09:43 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Jiri Kosina <jkosina@...e.cz>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Pavel Machek <pavel@....cz>,
Steven Rostedt <rostedt@...dmis.org>,
Dave Jones <davej@...hat.com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Nicolas Pitre <nico@...aro.org>, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org
Subject: Re: lockdep splat in CPU hotplug
On Wed, Oct 22, 2014 at 10:57:25PM +0200, Jiri Kosina wrote:
> On Wed, 22 Oct 2014, Paul E. McKenney wrote:
>
> > rcu: More on deadlock between CPU hotplug and expedited grace periods
> >
> > Commit dd56af42bd82 (rcu: Eliminate deadlock between CPU hotplug and
> > expedited grace periods) was incomplete. Although it did eliminate
> > deadlocks involving synchronize_sched_expedited()'s acquisition of
> > cpu_hotplug.lock via get_online_cpus(), it did nothing about the similar
> > deadlock involving acquisition of this same lock via put_online_cpus().
> > This deadlock became apparent with testing involving hibernation.
> >
> > This commit therefore makes put_online_cpus()'s acquisition of this
> > lock conditional: if acquisition fails, it instead increments a new
> > cpu_hotplug.puts_pending field. cpu_hotplug_begin() then checks this
> > field and, if it is non-zero, applies the deferred decrements to
> > cpu_hotplug.refcount.
> >
> >
>
> Yes, this works. FWIW, please feel free to add
>
> Reported-and-tested-by: Jiri Kosina <jkosina@...e.cz>
>
> once merging it.
Done, and thank you for both the bug report and the testing!
> Why lockdep produced such an incomplete stacktrace still remains
> unexplained.
On that, I must defer to people more familiar with stack frames.
Thanx, Paul
> Thanks,
>
> --
> Jiri Kosina
> SUSE Labs
>