Message-Id: <200905230023.16377.rjw@sisk.pl>
Date: Sat, 23 May 2009 00:23:15 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Johannes Berg <johannes@...solutions.net>
Cc: Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...e.hu>,
Zdenek Kabelac <zdenek.kabelac@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: INFO: possible circular locking dependency at cleanup_workqueue_thread
On Friday 22 May 2009, Johannes Berg wrote:
> On Tue, 2009-05-19 at 20:51 +0200, Oleg Nesterov wrote:
>
> > > > > Anyway, you can have a deadlock like this:
> > > > >
> > > > > CPU 3              CPU 2                        CPU 1
> > > > >                                                 suspend/hibernate
> > > > >                    something:
> > > > >                    rtnl_lock()                  device_pm_lock()
> > > > >                                                  -> mutex_lock(&dpm_list_mtx)
> > > > >
> > > > >                    mutex_lock(&dpm_list_mtx)
> > > > >
> > > > > linkwatch_work
> > > > >  -> rtnl_lock()
> > > > >                                                 disable_nonboot_cpus()
> > > >
> > > > let's suppose disable_nonboot_cpus() does not take cpu_add_remove_lock,
> > > >
> > > > >                                                  -> flush CPU 3 workqueue
> > > >
> > > > in this case, isn't the deadlock still there?
> > > >
> > > > We can't flush because we hold the lock (dpm_list_mtx) which depends
> > > > on another lock taken by work->func(), the "classical" bug with flush.
> > > >
> > > > No?
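
The "classical" flush bug described above is: flush a workqueue while holding a
lock that a pending work item also needs.  A minimal sketch of that pattern as a
hypothetical module (demo_lock, demo_wq and demo_work_func are made-up names
standing in for the rtnl/dpm_list_mtx chain; nothing here is taken from the
actual report):

/*
 * Hypothetical example module illustrating the pattern; demo_lock stands
 * in for the rtnl/dpm_list_mtx dependency, demo_wq for the CPU 3 keventd
 * workqueue.  None of these names come from the lockdep report itself.
 */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(demo_lock);
static struct workqueue_struct *demo_wq;

static void demo_work_func(struct work_struct *work)
{
        mutex_lock(&demo_lock);         /* like linkwatch_work -> rtnl_lock() */
        mutex_unlock(&demo_lock);
}
static DECLARE_WORK(demo_work, demo_work_func);

static int __init demo_init(void)
{
        demo_wq = create_workqueue("demo");
        if (!demo_wq)
                return -ENOMEM;

        queue_work(demo_wq, &demo_work);

        mutex_lock(&demo_lock);         /* like suspend holding dpm_list_mtx  */
        flush_workqueue(demo_wq);       /* waits for demo_work, which waits   */
        mutex_unlock(&demo_lock);       /* for demo_lock: deadlock if the     */
                                        /* work hasn't run yet; lockdep       */
                                        /* reports the dependency either way  */
        return 0;
}

static void __exit demo_exit(void)
{
        destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
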
> > >
> > > Yeah, it looks like cpu_add_remove_lock doesn't make a difference...
> > > It's just lockdep reporting a longer chain that also leads to a
> > > deadlock.
> >
> > So, we should not call cpu_down/disable_nonboot_cpus under device_pm_lock().
> >
> > At first glance this was changed by
> >
> > PM: Change hibernation code ordering
> > 4aecd6718939eb3c4145b248369b65f7483a8a02
> >
> > PM: Change suspend code ordering
> > 900af0d973856d6feb6fc088c2d0d3fde57707d3
> >
> > commits. Rafael, could you take a look?
>
> I just arrived at the same conclusion, heh.  I can't say I understand
> these changes, though; the part about calling the platform differently
> may make sense, but why disable non-boot CPUs at a different place?
Because the ordering of platform callbacks and cpu[_up()|_down()] is also
important, at least on resume.
In principle we can call device_pm_unlock() right before calling
disable_nonboot_cpus() and take the lock again right after calling
enable_nonboot_cpus(), if that helps.
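For illustration, a rough sketch of that idea (sketch_suspend_enter() is a
made-up stand-in for the real suspend path, which isn't quoted here; only the
lock-drop/re-take around the CPU hotplug calls is the point):

#include <linux/cpu.h>          /* disable_nonboot_cpus(), enable_nonboot_cpus() */
#include <linux/pm.h>           /* device_pm_lock(), device_pm_unlock() */

/* Hypothetical stand-in for the real suspend/resume path. */
static int sketch_suspend_enter(void)
{
        int error;

        /*
         * Drop dpm_list_mtx so that flushing the CPU workqueues inside
         * cpu_down() no longer happens under a lock that the pending
         * linkwatch_work -> rtnl_lock() chain depends on.
         */
        device_pm_unlock();
        error = disable_nonboot_cpus();
        device_pm_lock();
        if (error)
                return error;

        /* ... enter the sleep state and come back ... */

        /* Same dance in the other direction on resume. */
        device_pm_unlock();
        enable_nonboot_cpus();
        device_pm_lock();

        return 0;
}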
Thanks,
Rafael