Message-ID: <20160215171845.GZ11240@phenom.ffwll.local>
Date: Mon, 15 Feb 2016 18:18:45 +0100
From: Daniel Vetter <daniel@...ll.ch>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
	"Gautham R. Shenoy" <ego@...ux.vnet.ibm.com>,
	Intel graphics driver community testing & development <intel-gfx@...ts.freedesktop.org>,
	Linux kernel development <linux-kernel@...r.kernel.org>,
	David Hildenbrand <dahi@...ux.vnet.ibm.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: [Intel-gfx] [PATCH] [RFC] kernel/cpu: Use lockref for online CPU reference counting

On Mon, Feb 15, 2016 at 03:17:55PM +0100, Peter Zijlstra wrote:
> On Mon, Feb 15, 2016 at 02:36:43PM +0200, Joonas Lahtinen wrote:
> > Instead of implementing custom locked reference counting, use lockref.
> >
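For context, roughly what lockref buys here -- a minimal sketch of the generic
lockref API, not the actual patch, and the names below are made up:

#include <linux/lockref.h>

/*
 * Illustrative only: lockref packs a spinlock and a count together so
 * the common get/put can be done with a single cmpxchg on the fast
 * path (where the architecture supports it), without acquiring the
 * spinlock -- which also keeps that lock out of lockdep's dependency
 * chains for those paths.
 */
static struct lockref example_ref;

static void example_ref_init(void)
{
	spin_lock_init(&example_ref.lock);
	example_ref.count = 1;
}

static bool example_ref_get(void)
{
	return lockref_get_not_zero(&example_ref);	/* lockless fast path */
}

static void example_ref_put(void)
{
	if (lockref_put_or_lock(&example_ref))
		return;			/* decremented without taking the lock */
	/* last reference: the spinlock is now held, count untouched */
	example_ref.count--;
	/* ... release-time work would go here ... */
	spin_unlock(&example_ref.lock);
}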
> > The current implementation leads to a deadlock splat on Intel SKL platforms
> > when lockdep debugging is enabled.
> >
> > This is due to a few CPUfreq drivers (including Intel P-state) doing the
> > following: policy->rwsem is locked during driver initialization, and the
> > functions called during init that actually apply CPU limits use
> > get_online_cpus() (because they have other calling paths too), which briefly
> > takes cpu_hotplug.lock to increment cpu_hotplug.refcount.
> >
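The init-path ordering described above looks roughly like this (hypothetical
helper, not the actual cpufreq code):

#include <linux/cpu.h>
#include <linux/rwsem.h>

/*
 * Illustrative only: the driver holds policy->rwsem while applying
 * limits, and get_online_cpus() briefly takes cpu_hotplug.lock to bump
 * cpu_hotplug.refcount, so lockdep records the dependency
 * policy->rwsem -> cpu_hotplug.lock.
 */
static void apply_limits_during_init(struct rw_semaphore *policy_rwsem)
{
	down_write(policy_rwsem);	/* held across driver initialization */

	get_online_cpus();		/* briefly takes cpu_hotplug.lock */
	/* ... apply the per-CPU frequency limits ... */
	put_online_cpus();

	up_write(policy_rwsem);
}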
> > On a later calling path, during suspend, cpu_hotplug_begin() is called in
> > disable_nonboot_cpus(), and the CPUfreq callbacks invoked afterwards take
> > policy->rwsem while cpu_hotplug.lock is still held by cpu_hotplug_begin().
> > That gives us a potential deadlock scenario reported by our CI system
> > (though a very unlikely one). See the Bugzilla link for more details.
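And the reverse ordering on the suspend path, sketched with stand-in locks
(not the real symbols), is the inversion lockdep reports:

#include <linux/mutex.h>
#include <linux/rwsem.h>

static DEFINE_MUTEX(fake_cpu_hotplug_lock);	/* stands in for cpu_hotplug.lock */
static DECLARE_RWSEM(fake_policy_rwsem);	/* stands in for policy->rwsem */

/*
 * Illustrative only: cpu_hotplug_begin() holds cpu_hotplug.lock while
 * the cpufreq callbacks take policy->rwsem, i.e.
 * cpu_hotplug.lock -> policy->rwsem -- the mirror image of the init
 * path above.
 */
static void suspend_path_shape(void)
{
	mutex_lock(&fake_cpu_hotplug_lock);	/* cpu_hotplug_begin() */

	down_write(&fake_policy_rwsem);		/* cpufreq callback for the outgoing CPU */
	/* ... tear down / update the policy ... */
	up_write(&fake_policy_rwsem);

	mutex_unlock(&fake_cpu_hotplug_lock);	/* cpu_hotplug_done() */
}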
>
> I've been meaning to change the thing into a percpu-rwsem; I just
> haven't had time to look into the lockdep splat that was generated.
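A very rough sketch of that direction, as I read it (not an actual patch; the
names are hypothetical):

#include <linux/init.h>
#include <linux/percpu-rwsem.h>

static struct percpu_rw_semaphore hotplug_rwsem;

static int __init hotplug_rwsem_setup(void)
{
	return percpu_init_rwsem(&hotplug_rwsem);
}

/* would back get_online_cpus()/put_online_cpus() */
static void hotplug_read_lock(void)
{
	percpu_down_read(&hotplug_rwsem);	/* per-cpu counter on the read side */
}

static void hotplug_read_unlock(void)
{
	percpu_up_read(&hotplug_rwsem);
}

/* would back cpu_hotplug_begin()/cpu_hotplug_done() */
static void hotplug_write_lock(void)
{
	percpu_down_write(&hotplug_rwsem);	/* waits for and excludes all readers */
}

static void hotplug_write_unlock(void)
{
	percpu_up_write(&hotplug_rwsem);
}

That keeps the read side cheap while still giving the hotplug writer full
exclusion, instead of the mutex + refcount dance.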
I've thrown Joonas' patch into a local topic branch to shut up the noise in
our CI, and it seems to be effective at that (2 runs thus far). I'll drop it
again once we have a proper solution (whatever that turns out to be) upstream.
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch