Date:	Sun, 24 May 2009 20:58:46 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Ming Lei <tom.leiming@...il.com>
Cc:	Johannes Berg <johannes@...solutions.net>,
	Ingo Molnar <mingo@...e.hu>,
	Zdenek Kabelac <zdenek.kabelac@...il.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Gautham R Shenoy <ego@...ibm.com>,
	Oleg Nesterov <onestero@...hat.com>
Subject: Re: INFO: possible circular locking dependency at
 cleanup_workqueue_thread

Below are the original lockdep output and the one generated with Ming Lei's
BFS shortest-cycle patch applied.

It appears to find a slightly shorter cycle, dropping setup_lock from it --
though that might simply be a difference in setup or userland.

Looking again at Oleg's example, I think this again falls short of
finding the L1-L2 inversion, simply because we establish (and therefore
find) the longer cycle first.

Because we warn at the first cycle detected, we never continue building
dependencies that could form shorter cycles... I think?

/me goes trying to construct a scenario to disprove the above.
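
To make that concrete, here is a minimal userspace sketch -- not the actual
lockdep code; the node numbers and the toy graph are made up -- of why
stopping at the first detected cycle can hide a shorter one. Lock classes are
nodes, "B taken while holding A" is an edge A->B, and each new edge is checked
with a BFS before being recorded. Once a cycle is reported we stop recording,
so a later, shorter inversion never gets seen:

/*
 * Toy illustration (not lockdep internals): warning at the first cycle
 * found means later, shorter cycles are never recorded.
 */
#include <stdio.h>
#include <stdbool.h>

#define NCLASSES 8

static bool edge[NCLASSES][NCLASSES];
static bool stopped;		/* set after the first warning, like debug_locks */

/* BFS from 'src'; returns true if 'dst' is reachable. */
static bool reachable(int src, int dst)
{
	int queue[NCLASSES], head = 0, tail = 0;
	bool seen[NCLASSES] = { false };

	queue[tail++] = src;
	seen[src] = true;
	while (head < tail) {
		int n = queue[head++];
		if (n == dst)
			return true;
		for (int m = 0; m < NCLASSES; m++)
			if (edge[n][m] && !seen[m]) {
				seen[m] = true;
				queue[tail++] = m;
			}
	}
	return false;
}

/* Record "while holding 'from', 'to' was acquired". */
static void add_dependency(int from, int to)
{
	if (stopped)			/* after the first warning, nothing new is learned */
		return;
	if (reachable(to, from)) {	/* the new edge from->to would close a cycle */
		printf("circular dependency: %d -> %d closes a cycle\n", from, to);
		stopped = true;		/* mimic lockdep turning itself off */
		return;
	}
	edge[from][to] = true;
}

int main(void)
{
	/* Long chain established first: 0->1->2->3->4, then 4->0 warns. */
	add_dependency(0, 1);
	add_dependency(1, 2);
	add_dependency(2, 3);
	add_dependency(3, 4);
	add_dependency(4, 0);	/* first (long) cycle reported here */

	/*
	 * The direct inversion 1->0 would be a two-edge cycle, but it is
	 * never recorded because we already stopped above.
	 */
	add_dependency(1, 0);
	return 0;
}
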

On Tue, 2009-05-12 at 09:59 +0200, Zdenek Kabelac wrote:
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.30-rc5-00097-gd665355 #59
> -------------------------------------------------------
> pm-suspend/12129 is trying to acquire lock:
>  (events){+.+.+.}, at: [<ffffffff80259496>] cleanup_workqueue_thread+0x26/0xd0
> 
> but task is already holding lock:
>  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff80246e57>]
> cpu_maps_update_begin+0x17/0x20
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #5 (cpu_add_remove_lock){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff80246e57>] cpu_maps_update_begin+0x17/0x20
>        [<ffffffff80259c33>] __create_workqueue_key+0xc3/0x250
>        [<ffffffff80287b20>] stop_machine_create+0x40/0xb0
>        [<ffffffff8027a784>] sys_delete_module+0x84/0x270
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #4 (setup_lock){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff80287af7>] stop_machine_create+0x17/0xb0
>        [<ffffffff80246f06>] disable_nonboot_cpus+0x26/0x130
>        [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>        [<ffffffff8027dff7>] enter_state+0x107/0x170
>        [<ffffffff8027e0f9>] state_store+0x99/0x100
>        [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>        [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>        [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>        [<ffffffff802e2771>] sys_write+0x51/0x90
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #3 (dpm_list_mtx){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffff804532ff>] device_pm_add+0x1f/0xe0
>        [<ffffffff8044b9bf>] device_add+0x45f/0x570
>        [<ffffffffa007c578>] wiphy_register+0x158/0x280 [cfg80211]
>        [<ffffffffa017567c>] ieee80211_register_hw+0xbc/0x410 [mac80211]
>        [<ffffffffa01f7c5c>] iwl3945_pci_probe+0xa1c/0x1080 [iwl3945]
>        [<ffffffff803c4307>] local_pci_probe+0x17/0x20
>        [<ffffffff803c5458>] pci_device_probe+0x88/0xb0
>        [<ffffffff8044e1e9>] driver_probe_device+0x89/0x180
>        [<ffffffff8044e37b>] __driver_attach+0x9b/0xa0
>        [<ffffffff8044d67c>] bus_for_each_dev+0x6c/0xa0
>        [<ffffffff8044e03e>] driver_attach+0x1e/0x20
>        [<ffffffff8044d955>] bus_add_driver+0xd5/0x290
>        [<ffffffff8044e668>] driver_register+0x78/0x140
>        [<ffffffff803c56f6>] __pci_register_driver+0x66/0xe0
>        [<ffffffffa00bd05c>] 0xffffffffa00bd05c
>        [<ffffffff8020904f>] do_one_initcall+0x3f/0x1c0
>        [<ffffffff8027d071>] sys_init_module+0xb1/0x200
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #2 (cfg80211_mutex){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff8054e78c>] __mutex_lock_common+0x4c/0x3b0
>        [<ffffffff8054ebf6>] mutex_lock_nested+0x46/0x60
>        [<ffffffffa007e66a>] reg_todo+0x19a/0x590 [cfg80211]
>        [<ffffffff80258f18>] worker_thread+0x1e8/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #1 (reg_work){+.+.+.}:
>        [<ffffffff80271a64>] __lock_acquire+0xc64/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff80258f12>] worker_thread+0x1e2/0x3a0
>        [<ffffffff8025dc3a>] kthread+0x5a/0xa0
>        [<ffffffff8020d23a>] child_rip+0xa/0x20
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> -> #0 (events){+.+.+.}:
>        [<ffffffff80271b36>] __lock_acquire+0xd36/0x10a0
>        [<ffffffff80271f38>] lock_acquire+0x98/0x140
>        [<ffffffff802594bd>] cleanup_workqueue_thread+0x4d/0xd0
>        [<ffffffff8053fe79>] workqueue_cpu_callback+0xc9/0x10f
>        [<ffffffff805539a8>] notifier_call_chain+0x68/0xa0
>        [<ffffffff802630d6>] raw_notifier_call_chain+0x16/0x20
>        [<ffffffff8053d9ec>] _cpu_down+0x1cc/0x2d0
>        [<ffffffff80246f90>] disable_nonboot_cpus+0xb0/0x130
>        [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>        [<ffffffff8027dff7>] enter_state+0x107/0x170
>        [<ffffffff8027e0f9>] state_store+0x99/0x100
>        [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>        [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>        [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>        [<ffffffff802e2771>] sys_write+0x51/0x90
>        [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b
>        [<ffffffffffffffff>] 0xffffffffffffffff
> 
> other info that might help us debug this:
> 
> 4 locks held by pm-suspend/12129:
>  #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff8033f154>]
> sysfs_write_file+0x44/0x160
>  #1:  (pm_mutex){+.+.+.}, at: [<ffffffff8027df44>] enter_state+0x54/0x170
>  #2:  (dpm_list_mtx){+.+.+.}, at: [<ffffffff80452747>] device_pm_lock+0x17/0x20
>  #3:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff80246e57>]
> cpu_maps_update_begin+0x17/0x20
> 
> stack backtrace:
> Pid: 12129, comm: pm-suspend Not tainted 2.6.30-rc5-00097-gd665355 #59
> Call Trace:
>  [<ffffffff8026fa3d>] print_circular_bug_tail+0x9d/0xe0
>  [<ffffffff80271b36>] __lock_acquire+0xd36/0x10a0
>  [<ffffffff80270000>] ? mark_lock+0x3e0/0x400
>  [<ffffffff80271f38>] lock_acquire+0x98/0x140
>  [<ffffffff80259496>] ? cleanup_workqueue_thread+0x26/0xd0
>  [<ffffffff802594bd>] cleanup_workqueue_thread+0x4d/0xd0
>  [<ffffffff80259496>] ? cleanup_workqueue_thread+0x26/0xd0
>  [<ffffffff802703ed>] ? trace_hardirqs_on+0xd/0x10
>  [<ffffffff8053fe79>] workqueue_cpu_callback+0xc9/0x10f
>  [<ffffffff8054acf1>] ? cpu_callback+0x12/0x280
>  [<ffffffff805539a8>] notifier_call_chain+0x68/0xa0
>  [<ffffffff802630d6>] raw_notifier_call_chain+0x16/0x20
>  [<ffffffff8053d9ec>] _cpu_down+0x1cc/0x2d0
>  [<ffffffff80246f90>] disable_nonboot_cpus+0xb0/0x130
>  [<ffffffff8027dd8d>] suspend_devices_and_enter+0xbd/0x1b0
>  [<ffffffff8027dff7>] enter_state+0x107/0x170
>  [<ffffffff8027e0f9>] state_store+0x99/0x100
>  [<ffffffff803abe17>] kobj_attr_store+0x17/0x20
>  [<ffffffff8033f1e9>] sysfs_write_file+0xd9/0x160
>  [<ffffffff802e1c88>] vfs_write+0xb8/0x180
>  [<ffffffff8029088c>] ? audit_syscall_entry+0x21c/0x240
>  [<ffffffff802e2771>] sys_write+0x51/0x90
>  [<ffffffff8020c15b>] system_call_fastpath+0x16/0x1b


=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.30-rc6-tip #9
-------------------------------------------------------
bash/6174 is trying to acquire lock:
 (events){+.+.+.}, at: [<ffffffff81059076>] cleanup_workqueue_thread+0x28/0x10a

but task is already holding lock:
 (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff81046c26>] disable_nonboot_cpus+0x38/0x128

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (cpu_add_remove_lock){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff813553fe>] __mutex_lock_common+0x5e/0x494
       [<ffffffff81355883>] mutex_lock_nested+0x19/0x1b
       [<ffffffff81046c26>] disable_nonboot_cpus+0x38/0x128
       [<ffffffff8107b38e>] suspend_devices_and_enter+0xf5/0x1f4
       [<ffffffff8107b623>] enter_state+0x168/0x1ce
       [<ffffffff8107b745>] state_store+0xbc/0xdd
       [<ffffffff811737fb>] kobj_attr_store+0x17/0x19
       [<ffffffff81136620>] sysfs_write_file+0xe9/0x11e
       [<ffffffff810e2422>] vfs_write+0xb0/0x10a
       [<ffffffff810e254a>] sys_write+0x4c/0x75
       [<ffffffff8100bdf2>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #3 (dpm_list_mtx){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff813553fe>] __mutex_lock_common+0x5e/0x494
       [<ffffffff81355883>] mutex_lock_nested+0x19/0x1b
       [<ffffffff812312fe>] device_pm_add+0x23/0xcd
       [<ffffffff8122ad87>] device_add+0x38b/0x549
       [<ffffffff8130de59>] wiphy_register+0x139/0x1ee
       [<ffffffff81317dff>] ieee80211_register_hw+0xee/0x3bf
       [<ffffffff8126163a>] iwl_setup_mac+0x8b/0xd1
       [<ffffffff8126fbcd>] iwl_pci_probe+0x7f5/0x921
       [<ffffffff81188d21>] local_pci_probe+0x17/0x1b
       [<ffffffff81059262>] do_work_for_cpu+0x18/0x2a
       [<ffffffff8105dc98>] kthread+0x5b/0x88
       [<ffffffff8100cf7a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #2 (cfg80211_mutex){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff813553fe>] __mutex_lock_common+0x5e/0x494
       [<ffffffff81355883>] mutex_lock_nested+0x19/0x1b
       [<ffffffff8130f52a>] reg_todo+0x53/0x490
       [<ffffffff81058870>] worker_thread+0x250/0x3dc
       [<ffffffff8105dc98>] kthread+0x5b/0x88
       [<ffffffff8100cf7a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 (reg_work){+.+.+.}:
       [<ffffffff8106f74d>] __lock_acquire+0xa80/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff810587f7>] worker_thread+0x1d7/0x3dc
       [<ffffffff8105dc98>] kthread+0x5b/0x88
       [<ffffffff8100cf7a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (events){+.+.+.}:
       [<ffffffff8106f641>] __lock_acquire+0x974/0xc08
       [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
       [<ffffffff8105909d>] cleanup_workqueue_thread+0x4f/0x10a
       [<ffffffff81343fe4>] workqueue_cpu_callback+0xc9/0x10d
       [<ffffffff81359cfd>] notifier_call_chain+0x33/0x5b
       [<ffffffff81062254>] raw_notifier_call_chain+0x14/0x16
       [<ffffffff81342624>] _cpu_down+0x283/0x2a0
       [<ffffffff81046c6b>] disable_nonboot_cpus+0x7d/0x128
       [<ffffffff8107b38e>] suspend_devices_and_enter+0xf5/0x1f4
       [<ffffffff8107b623>] enter_state+0x168/0x1ce
       [<ffffffff8107b745>] state_store+0xbc/0xdd
       [<ffffffff811737fb>] kobj_attr_store+0x17/0x19
       [<ffffffff81136620>] sysfs_write_file+0xe9/0x11e
       [<ffffffff810e2422>] vfs_write+0xb0/0x10a
       [<ffffffff810e254a>] sys_write+0x4c/0x75
       [<ffffffff8100bdf2>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

4 locks held by bash/6174:
 #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff81136574>] sysfs_write_file+0x3d/0x11e
 #1:  (pm_mutex){+.+.+.}, at: [<ffffffff8107b680>] enter_state+0x1c5/0x1ce
 #2:  (dpm_list_mtx){+.+.+.}, at: [<ffffffff8123084d>] device_pm_lock+0x17/0x19
 #3:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff81046c26>] disable_nonboot_cpus+0x38/0x128

stack backtrace:
Pid: 6174, comm: bash Not tainted 2.6.30-rc6-tip #9
Call Trace:
 [<ffffffff8106e92d>] print_circular_bug+0x1cc/0x201
 [<ffffffff8106f641>] __lock_acquire+0x974/0xc08
 [<ffffffff8106f9dd>] lock_acquire+0x108/0x134
 [<ffffffff81059076>] ? cleanup_workqueue_thread+0x28/0x10a
 [<ffffffff8105909d>] cleanup_workqueue_thread+0x4f/0x10a
 [<ffffffff81059076>] ? cleanup_workqueue_thread+0x28/0x10a
 [<ffffffff81343fe4>] workqueue_cpu_callback+0xc9/0x10d
 [<ffffffff81359cfd>] notifier_call_chain+0x33/0x5b
 [<ffffffff81062254>] raw_notifier_call_chain+0x14/0x16
 [<ffffffff81342624>] _cpu_down+0x283/0x2a0
 [<ffffffff81046c6b>] disable_nonboot_cpus+0x7d/0x128
 [<ffffffff8107b38e>] suspend_devices_and_enter+0xf5/0x1f4
 [<ffffffff8107b623>] enter_state+0x168/0x1ce
 [<ffffffff8107b745>] state_store+0xbc/0xdd
 [<ffffffff811737fb>] kobj_attr_store+0x17/0x19
 [<ffffffff81136620>] sysfs_write_file+0xe9/0x11e
 [<ffffffff810e2422>] vfs_write+0xb0/0x10a
 [<ffffffff810e254a>] sys_write+0x4c/0x75
 [<ffffffff8100bdf2>] system_call_fastpath+0x16/0x1b


