Message-ID: <4BB420D6.7050401@redhat.com>
Date: Thu, 01 Apr 2010 12:28:06 +0800
From: Cong Wang <amwang@...hat.com>
To: Tejun Heo <tj@...nel.org>
CC: Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
Rusty Russell <rusty@...tcorp.com.au>,
akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: [Patch] workqueue: move lockdep annotations up to destroy_workqueue()
Tejun Heo wrote:
> Hello,
>
> On 04/01/2010 01:09 PM, Cong Wang wrote:
>>> This seems to be from the original thread of frame#3. It's grabbing
>>> wq lock here but the problem is that the lock will be released
>>> immediately, so bond_dev->name (the wq) can't be held by the time it
>>> reaches frame#3. How is this dependency chain completed? Is it
>>> somehow transitive through rtnl_mutex?
>> The wq lock is taken *after* cpu_add_remove_lock; lockdep said this too.
>> The process is trying to take the wq lock while holding cpu_add_remove_lock.
>
> Yeah yeah, I'm just failing to see how the other direction is
> completed, i.e. where does the kernel try to grab cpu_add_remove_lock
> *after* grabbing the wq lock?
>
>>> Isn't there a circular dependency here? bonding_exit() calls
>>> destroy_workqueue() under rtnl_mutex but destroy_workqueue() should
>>> flush works which could be trying to grab rtnl_lock. Or am I
>>> completely misunderstanding locking here?
>> Sure, that is why I sent another patch for bonding. :)
>
> Ah... great. :-)
>
>> After this patch, another lockdep warning appears; it is exactly what
>> you expected.
>
> Hmmm... can you please try to see whether this circular locking
> warning involving wq->lockdep_map is reproducible w/ the bonding
> locking fixed? I still can't see where wq -> cpu_add_remove_lock
> dependency is created.
>
I thought this was obvious.
Here it is:
void destroy_workqueue(struct workqueue_struct *wq)
{
        const struct cpumask *cpu_map = wq_cpu_map(wq);
        int cpu;

        cpu_maps_update_begin();        <----------- Hold cpu_add_remove_lock here
        spin_lock(&workqueue_lock);
        list_del(&wq->list);
        spin_unlock(&workqueue_lock);

        for_each_cpu(cpu, cpu_map)
                cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu));  <------ See below
        cpu_maps_update_done();         <----------- Release cpu_add_remove_lock here
        ...
static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
{
        /*
         * Our caller is either destroy_workqueue() or CPU_POST_DEAD,
         * cpu_add_remove_lock protects cwq->thread.
         */
        if (cwq->thread == NULL)
                return;

        lock_map_acquire(&cwq->wq->lockdep_map);  <------ Lockdep complains here.
        lock_map_release(&cwq->wq->lockdep_map);
        ...
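
(Side note: lock_map_acquire()/lock_map_release() make lockdep treat the
workqueue as a pseudo-lock, so any lock held at that point, here
cpu_add_remove_lock, gets a dependency edge to wq->lockdep_map. A minimal
sketch of the same annotation pattern, with hypothetical names, not actual
kernel code:

#include <linux/lockdep.h>

/* Hypothetical pseudo-lock, so lockdep can order "waiting for the
 * resource" against the real locks the waiter holds. */
static struct lockdep_map my_map =
        STATIC_LOCKDEP_MAP_INIT("my_resource", &my_map);

static void wait_for_my_resource(void)
{
        lock_map_acquire(&my_map);      /* pretend to take the pseudo-lock */
        /* ... really wait for / flush the resource here ... */
        lock_map_release(&my_map);
}

So if any work item on this wq ever took cpu_add_remove_lock while
running, the two orders would invert and lockdep would warn.)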
Am I missing something??
Thanks.