Message-ID: <20160916140403.GA7764@yexl-desktop>
Date:   Fri, 16 Sep 2016 22:04:03 +0800
From:   kernel test robot <xiaolong.ye@...el.com>
To:     Christophe JAILLET <christophe.jaillet@...adoo.fr>
Cc:     davem@...emloft.net, kuznet@....inr.ac.ru, jmorris@...ei.org,
        yoshfuji@...ux-ipv6.org, kaber@...sh.net, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, kernel-janitors@...r.kernel.org,
        Christophe JAILLET <christophe.jaillet@...adoo.fr>, lkp@...org
Subject: [lkp] [net]  70a8118a03: BUG: workqueue leaked lock or atomic:
 kworker/0:1/0x00000000/28


FYI, we noticed the following commit:

https://github.com/0day-ci/linux Christophe-JAILLET/net-inet-diag-Fix-an-error-handling/20160912-140503
commit 70a8118a03243de2aba508d79cc1a042db094191 ("net: inet: diag: Fix an error handling")

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -smp 2 -m 512M

caused the following changes:


+----------------------------------------------------+------------+------------+
|                                                    | 373df3131a | 70a8118a03 |
+----------------------------------------------------+------------+------------+
| boot_successes                                     | 6          | 3          |
| boot_failures                                      | 17         | 19         |
| BUG:unable_to_handle_kernel                        | 2          |            |
| Oops                                               | 2          |            |
| calltrace:compat_SyS_ipc                           | 2          |            |
| Kernel_panic-not_syncing:Fatal_exception           | 2          |            |
| invoked_oom-killer:gfp_mask=0x                     | 4          | 2          |
| Mem-Info                                           | 4          | 2          |
| BUG:kernel_reboot-without-warning_in_test_stage    | 11         | 5          |
| Out_of_memory:Kill_process                         | 1          | 1          |
| BUG:kernel_hang_in_test_stage                      | 0          | 2          |
| BUG:workqueue_leaked_lock_or_atomic:kworker        | 0          | 11         |
| calltrace:dump_stack                               | 0          | 11         |
| INFO:possible_circular_locking_dependency_detected | 0          | 11         |
| calltrace:ret_from_fork                            | 0          | 11         |
| calltrace:sock_diag_broadcast_destroy_work         | 0          | 11         |
| calltrace:lock_acquire                             | 0          | 11         |
| calltrace:inet_diag_lock_handler                   | 0          | 11         |
+----------------------------------------------------+------------+------------+
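
For readers not familiar with this code, here is a simplified sketch of
the locking pattern involved (paraphrased from net/ipv4/inet_diag.c of
that era, not the literal source; what the patch under test actually
changed is an assumption on our part, not taken from its diff).
inet_diag_lock_handler() returns with inet_diag_table_mutex held on
*both* the success and the -ENOENT path, so callers must reach
inet_diag_unlock_handler() even when they get an ERR_PTR back:

	/* Simplified sketch: the mutex stays held on every return path. */
	static const struct inet_diag_handler *inet_diag_lock_handler(int proto)
	{
		mutex_lock(&inet_diag_table_mutex);
		if (!inet_diag_table[proto])
			return ERR_PTR(-ENOENT);	/* mutex still held */
		return inet_diag_table[proto];
	}

	static void inet_diag_unlock_handler(const struct inet_diag_handler *handler)
	{
		mutex_unlock(&inet_diag_table_mutex);
	}

	/* Caller sketch: the unlock must run on the error path too. */
	handler = inet_diag_lock_handler(req->sdiag_protocol);
	if (IS_ERR(handler))
		err = PTR_ERR(handler);	/* bailing out here would leak the mutex */
	else
		err = handler->destroy(in_skb, req);
	inet_diag_unlock_handler(handler);

An error-handling rework that returns early on IS_ERR(handler) and
skips the unlock would leave the mutex held when the work item
returns, which matches the "1 lock held ... inet_diag_table_mutex, at:
inet_diag_lock_handler" report below.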



[   34.367674] init: tty3 main process ended, respawning
[   34.444537] init: tty6 main process (356) terminated with status 1
[   34.445711] init: tty6 main process ended, respawning
[   34.657943] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/28
[   34.657943]      last function: sock_diag_broadcast_destroy_work
[   34.674672] 1 lock held by kworker/0:1/28:
[   34.675402]  #0:  (inet_diag_table_mutex){+.+...}, at: [<ffffffff9dee5f51>] inet_diag_lock_handler+0x4e/0x6b
[   34.685206] CPU: 0 PID: 28 Comm: kworker/0:1 Not tainted 4.8.0-rc4-00239-g70a8118 #2
[   34.686489] Workqueue: sock_diag_events sock_diag_broadcast_destroy_work
[   34.688013]  0000000000000000 ffff88001bc9bc68 ffffffff9dcce3fa ffff88001bc94740
[   34.689383]  ffff88001d614cc0 ffff88001bc7ab40 ffff88001bc94740 ffff88001bc9bd48
[   34.690737]  ffffffff9dac0997 ffffffff9dac088f ffff88001bc94740 ffffe8ffffc02f05
[   34.691995] Call Trace:
[   34.692409]  [<ffffffff9dcce3fa>] dump_stack+0x89/0xcb
[   34.693268]  [<ffffffff9dac0997>] process_one_work+0x2e4/0x414
[   34.694280]  [<ffffffff9dac088f>] ? process_one_work+0x1dc/0x414
[   34.695115] trinity-main (440) used greatest stack depth: 10480 bytes left
[   34.697086]  [<ffffffff9dac0b1a>] ? worker_thread+0x53/0x3ea
[   34.698036]  [<ffffffff9dac0d82>] worker_thread+0x2bb/0x3ea
[   34.699007]  [<ffffffff9dfd51ad>] ? _raw_spin_unlock_irqrestore+0x42/0x64
[   34.700143]  [<ffffffff9dac0ac7>] ? process_one_work+0x414/0x414
[   34.701154]  [<ffffffff9dac0ac7>] ? process_one_work+0x414/0x414
[   34.702154]  [<ffffffff9dfd00ee>] ? schedule+0x9f/0xb4
[   34.703184]  [<ffffffff9dac0ac7>] ? process_one_work+0x414/0x414
[   34.704158]  [<ffffffff9dac566f>] kthread+0xe6/0xee
[   34.704956]  [<ffffffff9dfd55cf>] ret_from_fork+0x1f/0x40
[   34.705848]  [<ffffffff9dac5589>] ? __init_kthread_worker+0x59/0x59
[   34.742163] 
[   34.746911] ======================================================
[   34.747955] [ INFO: possible circular locking dependency detected ]
[   34.749010] 4.8.0-rc4-00239-g70a8118 #2 Not tainted
[   34.749842] -------------------------------------------------------
[   34.751043] kworker/0:1/28 is trying to acquire lock:
[   34.751905]  ((&bsk->work)){+.+.+.}, at: [<ffffffff9dac088f>] process_one_work+0x1dc/0x414
[   34.753383] 
[   34.753383] but task is already holding lock:
[   34.754357]  (inet_diag_table_mutex){+.+...}, at: [<ffffffff9dee5f51>] inet_diag_lock_handler+0x4e/0x6b
[   34.756018] 
[   34.756018] which lock already depends on the new lock.
[   34.756018] 
[   34.757342] 
[   34.757342] the existing dependency chain (in reverse order) is:
[   34.758710] 
-> #1 (inet_diag_table_mutex){+.+...}:
[   34.759634]        [<ffffffff9daed6cf>] validate_chain+0x5ac/0x6d5
[   34.760629]        [<ffffffff9daedc2c>] __lock_acquire+0x434/0x4e8
[   34.761608]        [<ffffffff9daee024>] __lock_release+0x287/0x309
[   34.762616]        [<ffffffff9daee105>] lock_release+0x5f/0x93
[   34.763557]        [<ffffffff9dfd0e15>] __mutex_unlock_slowpath+0xef/0x175
[   34.764637]        [<ffffffff9dfd0f28>] mutex_unlock+0x9/0xb
[   34.765523]        [<ffffffff9de6d843>] sock_diag_broadcast_destroy_work+0xea/0x134
[   34.766836]        [<ffffffff9dac08f9>] process_one_work+0x246/0x414
[   34.767827]        [<ffffffff9dac0d82>] worker_thread+0x2bb/0x3ea
[   34.768809]        [<ffffffff9dac566f>] kthread+0xe6/0xee
[   34.769704]        [<ffffffff9dfd55cf>] ret_from_fork+0x1f/0x40
[   34.770715] 
-> #0 ((&bsk->work)){+.+.+.}:
[   34.771490]        [<ffffffff9daecc1e>] check_prev_add+0x114/0x619
[   34.772515]        [<ffffffff9daed6cf>] validate_chain+0x5ac/0x6d5
[   34.773547]        [<ffffffff9daedc2c>] __lock_acquire+0x434/0x4e8
[   34.774717]        [<ffffffff9daedd7c>] lock_acquire+0x9c/0xbd
[   34.775681]        [<ffffffff9dac08f3>] process_one_work+0x240/0x414
[   34.776760]        [<ffffffff9dac0d82>] worker_thread+0x2bb/0x3ea
[   34.777802]        [<ffffffff9dac566f>] kthread+0xe6/0xee
[   34.778760]        [<ffffffff9dfd55cf>] ret_from_fork+0x1f/0x40
[   34.779776] 
[   34.779776] other info that might help us debug this:
[   34.779776] 
[   34.781109]  Possible unsafe locking scenario:
[   34.781109] 
[   34.782101]        CPU0                    CPU1
[   34.782980]        ----                    ----
[   34.783713]   lock(inet_diag_table_mutex);
[   34.784422]                                lock((&bsk->work));
[   34.785399]                                lock(inet_diag_table_mutex);
[   34.786381]   lock((&bsk->work));
[   34.786920] 
[   34.786920]  *** DEADLOCK ***
[   34.786920] 
[   34.787763] 2 locks held by kworker/0:1/28:
[   34.788370]  #0:  (inet_diag_table_mutex){+.+...}, at: [<ffffffff9dee5f51>] inet_diag_lock_handler+0x4e/0x6b
[   34.789850]  #1:  ("sock_diag_events"){.+.+.+}, at: [<ffffffff9dac088f>] process_one_work+0x1dc/0x414
[   34.791363] 
[   34.791363] stack backtrace:
[   34.791996] CPU: 0 PID: 28 Comm: kworker/0:1 Not tainted 4.8.0-rc4-00239-g70a8118 #2
[   34.793098] Workqueue: sock_diag_events sock_diag_broadcast_destroy_work
[   34.794066]  0000000000000000 ffff88001bc9b9d8 ffffffff9dcce3fa ffff88001bc94740
[   34.795344]  0000000000000000 0000000000000000 0000000000000000 ffff88001bc9ba28
[   34.796669]  ffffffff9daec947 ffff88001bc9ba48 ffff88001bc9ba48 ffff88001bc9ba28
[   34.797978] Call Trace:
[   34.798408]  [<ffffffff9dcce3fa>] dump_stack+0x89/0xcb
[   34.799425]  [<ffffffff9daec947>] print_circular_bug+0xcf/0xe0
[   34.800409]  [<ffffffff9daecc1e>] check_prev_add+0x114/0x619
[   34.801358]  [<ffffffff9daed6cf>] validate_chain+0x5ac/0x6d5
[   34.802324]  [<ffffffff9daedc2c>] __lock_acquire+0x434/0x4e8
[   34.803309]  [<ffffffff9daedd7c>] lock_acquire+0x9c/0xbd
[   34.804205]  [<ffffffff9dac088f>] ? process_one_work+0x1dc/0x414
[   34.805226]  [<ffffffff9dac08f3>] process_one_work+0x240/0x414
[   34.806205]  [<ffffffff9dac088f>] ? process_one_work+0x1dc/0x414
[   34.807366]  [<ffffffff9dac0b1a>] ? worker_thread+0x53/0x3ea
[   34.808317]  [<ffffffff9dac0d82>] worker_thread+0x2bb/0x3ea
[   34.809266]  [<ffffffff9dfd51ad>] ? _raw_spin_unlock_irqrestore+0x42/0x64
[   34.810331]  [<ffffffff9dac0ac7>] ? process_one_work+0x414/0x414
[   34.811280]  [<ffffffff9dac0ac7>] ? process_one_work+0x414/0x414
[   34.812236]  [<ffffffff9dfd00ee>] ? schedule+0x9f/0xb4
[   34.813064]  [<ffffffff9dac0ac7>] ? process_one_work+0x414/0x414
[   34.813983]  [<ffffffff9dac566f>] kthread+0xe6/0xee
[   34.814798]  [<ffffffff9dfd55cf>] ret_from_fork+0x1f/0x40
[   34.815551]  [<ffffffff9dac5589>] ? __init_kthread_worker+0x59/0x59
[   42.173507] ls /sys/class/net
[   42.216524] lo
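
Reading the two splats together (our interpretation, not part of the
robot's output): the work function returned to process_one_work() with
inet_diag_table_mutex still held, which trips the "workqueue leaked
lock or atomic" check, and the still-held mutex then orders against
the next work item's lockdep map, producing the circular-dependency
report. The "Possible unsafe locking scenario" above is the classic
AB-BA inversion; as a didactic aside, the same shape can be reproduced
in plain userspace C with two pthread mutexes (nothing below comes
from the kernel sources):

	/* AB-BA deadlock illustration: thread 1 takes A then B, thread 2
	 * takes B then A; with this timing both block forever. */
	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;	/* cf. inet_diag_table_mutex */
	static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;	/* cf. (&bsk->work) */

	static void *t1(void *arg)
	{
		pthread_mutex_lock(&A);
		sleep(1);			/* widen the race window */
		pthread_mutex_lock(&B);		/* blocks: t2 holds B */
		pthread_mutex_unlock(&B);
		pthread_mutex_unlock(&A);
		return NULL;
	}

	static void *t2(void *arg)
	{
		pthread_mutex_lock(&B);
		sleep(1);
		pthread_mutex_lock(&A);		/* blocks: t1 holds A */
		pthread_mutex_unlock(&A);
		pthread_mutex_unlock(&B);
		return NULL;
	}

	int main(void)
	{
		pthread_t x, y;

		pthread_create(&x, NULL, t1, NULL);
		pthread_create(&y, NULL, t2, NULL);
		pthread_join(x, NULL);		/* never returns once both block */
		pthread_join(y, NULL);
		puts("no deadlock this run");
		return 0;
	}

(Build with "cc -pthread". Note that lockdep reports the inversion
even on runs where the timing never actually deadlocks.)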

Thanks,
Xiaolong

View attachment "config-4.8.0-rc4-00239-g70a8118" of type "text/plain" (87510 bytes)

View attachment "job-script" of type "text/plain" (3876 bytes)

Download attachment "dmesg.xz" of type "application/octet-stream" (12772 bytes)
