Message-ID: <506BB283.4010800@linux.vnet.ibm.com>
Date: Wed, 03 Oct 2012 09:05:31 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Jiri Kosina <jkosina@...e.cz>
CC: "Paul E. McKenney" <paul.mckenney@...aro.org>,
Josh Triplett <josh@...htriplett.org>,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: Lockdep complains about commit 1331e7a1bb ("rcu: Remove _rcu_barrier()
dependency on __stop_machine()")
On 10/03/2012 03:47 AM, Jiri Kosina wrote:
> On Wed, 3 Oct 2012, Srivatsa S. Bhat wrote:
>
>> I don't see how this circular locking dependency can occur. If you are using SLUB,
>> kmem_cache_destroy() releases slab_mutex before it calls rcu_barrier(). If you are
>> using SLAB, kmem_cache_destroy() wraps its whole operation inside get/put_online_cpus(),
>> which means it cannot run concurrently with a hotplug operation such as cpu_up(). So I'm
>> rather puzzled by this lockdep splat.
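
[ For reference, a simplified sketch of the SLAB-side pattern I was
  describing above; this is not the exact mm/slab.c code, only the
  lock nesting: ]

void kmem_cache_destroy(struct kmem_cache *cachep)
{
	get_online_cpus();		/* pin hotplug across the whole destroy */
	mutex_lock(&slab_mutex);
	/* ... unlink the cache from slab_caches and shrink it ... */
	if (unlikely(cachep->flags & SLAB_DESTROY_BY_RCU))
		rcu_barrier();		/* note: runs with slab_mutex held and
					 * the hotplug refcount elevated */
	/* ... free the cache structure ... */
	mutex_unlock(&slab_mutex);
	put_online_cpus();
}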
>
> I am using SLAB here.
>
> The scenario that I think is quite possible:
>
>
> CPU 0                                 CPU 1
>       kmem_cache_destroy()
What about the get_online_cpus() right here on CPU 0, before it
calls mutex_lock(slab_mutex)? While that hotplug reference is held,
how can the cpu_up() on CPU 1 get past cpu_hotplug_begin()? I still
don't get it... :(
(kmem_cache_destroy() uses get/put_online_cpus() around acquiring
and releasing slab_mutex).
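
[ From memory, the relevant kernel/cpu.c side of that era looks
  roughly like the sketch below; the refcount taken by readers is
  exactly what should park cpu_up() inside cpu_hotplug_begin(): ]

void get_online_cpus(void)
{
	might_sleep();
	if (cpu_hotplug.active_writer == current)
		return;
	mutex_lock(&cpu_hotplug.lock);
	cpu_hotplug.refcount++;			/* a reader is now present */
	mutex_unlock(&cpu_hotplug.lock);
}

static void cpu_hotplug_begin(void)		/* called from _cpu_up() */
{
	cpu_hotplug.active_writer = current;
	for (;;) {
		mutex_lock(&cpu_hotplug.lock);
		if (likely(!cpu_hotplug.refcount))
			break;			/* no readers; proceed, lock held */
		__set_current_state(TASK_UNINTERRUPTIBLE);
		mutex_unlock(&cpu_hotplug.lock);
		schedule();			/* readers in; wait and retry */
	}
}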
Regards,
Srivatsa S. Bhat
>       mutex_lock(slab_mutex)
>                                       _cpu_up()
>                                       cpu_hotplug_begin()
>                                       mutex_lock(cpu_hotplug.lock)
>       rcu_barrier()
>       _rcu_barrier()
>       get_online_cpus()
>       mutex_lock(cpu_hotplug.lock)
>       (blocks, CPU 1 has the mutex)
>                                       __cpu_notify()
>                                       mutex_lock(slab_mutex)
>
> Deadlock.
>
> Right?
>
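
[ For what it's worth, the trace above is the classic ABBA pattern.
  Below is a self-contained userspace analogy; the thread functions
  are hypothetical stand-ins for the two CPUs, not kernel code: ]

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpu_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;

/* plays CPU 0: the kmem_cache_destroy() -> rcu_barrier() path */
static void *cpu0_thread(void *unused)
{
	pthread_mutex_lock(&slab_mutex);
	sleep(1);	/* let "CPU 1" take cpu_hotplug_lock meanwhile */
	pthread_mutex_lock(&cpu_hotplug_lock);	/* get_online_cpus(): blocks */
	pthread_mutex_unlock(&cpu_hotplug_lock);
	pthread_mutex_unlock(&slab_mutex);
	return NULL;
}

/* plays CPU 1: the _cpu_up() -> __cpu_notify() path */
static void *cpu1_thread(void *unused)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	sleep(1);	/* let "CPU 0" take slab_mutex meanwhile */
	pthread_mutex_lock(&slab_mutex);	/* CPU_UP notifier: blocks */
	pthread_mutex_unlock(&slab_mutex);
	pthread_mutex_unlock(&cpu_hotplug_lock);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0_thread, NULL);
	pthread_create(&t1, NULL, cpu1_thread, NULL);
	pthread_join(t0, NULL);		/* never returns: ABBA deadlock */
	pthread_join(t1, NULL);
	puts("unreachable");
	return 0;
}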