Message-ID: <4BB4381C.8010800@redhat.com>
Date:	Thu, 01 Apr 2010 14:07:24 +0800
From:	Cong Wang <amwang@...hat.com>
To:	Tejun Heo <tj@...nel.org>
CC:	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
	Rusty Russell <rusty@...tcorp.com.au>,
	akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: [Patch] workqueue: move lockdep annotations up to destroy_workqueue()

Cong Wang wrote:
> Cong Wang wrote:
>> Tejun Heo wrote:
>>> Hello,
>>>
>>> On 04/01/2010 01:28 PM, Cong Wang wrote:
>>>>> Hmmm... can you please try to see whether this circular locking
>>>>> warning involving wq->lockdep_map is reproducible w/ the bonding
>>>>> locking fixed?  I still can't see where the wq -> cpu_add_remove_lock
>>>>> dependency is created.
>>>>>
>>>> I thought this was obvious.
>>>>
>>>> Here it is:
>>>>
>>>> void destroy_workqueue(struct workqueue_struct *wq)
>>>> {
>>>>         const struct cpumask *cpu_map = wq_cpu_map(wq);
>>>>         int cpu;
>>>>
>>>>         cpu_maps_update_begin();        <----------------- Hold cpu_add_remove_lock here
>>>>         spin_lock(&workqueue_lock);
>>>>         list_del(&wq->list);
>>>>         spin_unlock(&workqueue_lock);
>>>>
>>>>         for_each_cpu(cpu, cpu_map)
>>>>                 cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu)); <------ See below
>>>>         cpu_maps_update_done();        <----------------- Release cpu_add_remove_lock here
>>>>
>>>> ...
>>>> static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
>>>> {
>>>>         /*
>>>>          * Our caller is either destroy_workqueue() or CPU_POST_DEAD,
>>>>          * cpu_add_remove_lock protects cwq->thread.
>>>>          */
>>>>         if (cwq->thread == NULL)
>>>>                 return;
>>>>
>>>>         lock_map_acquire(&cwq->wq->lockdep_map); <-------------- Lockdep complains here.
>>>>         lock_map_release(&cwq->wq->lockdep_map);
>>>> ...
>>>
>>> Yeap, the above is cpu_add_remove_lock -> wq->lockdep_map dependency.
>>> I can see that, but I'm failing to see where the dependency in the
>>> other direction is created.
>>>
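The reverse edge (wq->lockdep_map -> something) can only be recorded while
wq->lockdep_map is held, and the place that happens is around work item
execution in run_workqueue().  A condensed sketch of that path, simplified
from kernel/workqueue.c of that era with the bookkeeping trimmed:

static void run_workqueue(struct cpu_workqueue_struct *cwq)
{
        spin_lock_irq(&cwq->lock);
        while (!list_empty(&cwq->worklist)) {
                struct work_struct *work = list_entry(cwq->worklist.next,
                                                      struct work_struct, entry);
                work_func_t f = work->func;

                list_del_init(cwq->worklist.next);
                spin_unlock_irq(&cwq->lock);

                /* lockdep treats wq->lockdep_map as held across the work
                 * function, so every lock f() takes is recorded as
                 * wq->lockdep_map -> that lock. */
                lock_map_acquire(&cwq->wq->lockdep_map);
                f(work);
                lock_map_release(&cwq->wq->lockdep_map);

                spin_lock_irq(&cwq->lock);
        }
        spin_unlock_irq(&cwq->lock);
}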
>>
>> Hmm, it looks like I misunderstood lock_map_acquire()? From the changelog,
>> I thought it was added to complain whenever its caller is holding a lock
>> while invoking it, so cpu_add_remove_lock would be no exception.
>>
> 
> Oh, I see, wq->lockdep_map is acquired again in run_workqueue(), so I 
> was wrong. :)
> I think you and Oleg are right, the lockdep warning is not irrelevant.
> 

Oops, typo, I meant "is irrelevant." ;)
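
Putting the two sides together: the teardown path records
cpu_add_remove_lock -> wq->lockdep_map, and the execution path records
wq->lockdep_map -> <whatever a work function takes>.  A splat therefore
needs a work item that, directly or through a lock it takes, ends up
ordered against cpu_add_remove_lock.  A purely hypothetical work function
to illustrate how such a cycle would close (not a claim about what the
reported bonding trace actually does):

/* Teardown side (quoted above):
 *   destroy_workqueue()
 *     cpu_maps_update_begin();              takes cpu_add_remove_lock
 *     cleanup_workqueue_thread()
 *       lock_map_acquire(&wq->lockdep_map); records
 *                                           cpu_add_remove_lock -> wq->lockdep_map
 */

static struct notifier_block example_nb;    /* hypothetical */

static void example_work_fn(struct work_struct *work)
{
        /* unregister_cpu_notifier() runs cpu_maps_update_begin(), i.e. it
         * takes cpu_add_remove_lock.  run_workqueue() is holding
         * wq->lockdep_map around us, so lockdep records
         * wq->lockdep_map -> cpu_add_remove_lock -- the edge that would
         * close the cycle with the teardown side. */
        unregister_cpu_notifier(&example_nb);
}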

