Date:   Tue, 21 Nov 2017 20:50:47 +0800
From:   Fengguang Wu <fengguang.wu@...el.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        linux-kernel@...r.kernel.org, lkp@...org
Subject: [test_cycle_work] WARNING: possible circular locking dependency
 detected

Hello,

FYI, this happens in mainline kernel 4.14.0-06692-g5bbcc0f.
It looks like a new regression introduced around 4.14-rc1.

It occurs in 2 out of 2 boots.

[    2.026528] spin_lock-torture: Creating lock_torture_stats task
[    2.027461] spin_lock-torture: lock_torture_writer task started
[    2.028686] spin_lock-torture: lock_torture_stats task started
[    2.531239]
[    2.531439] ======================================================
[    2.532005] WARNING: possible circular locking dependency detected
[    2.532570] 4.14.0-06692-g5bbcc0f #1 Not tainted
[    2.532979] ------------------------------------------------------
[    2.533515] kworker/u4:1/54 is trying to acquire lock:
[    2.533967]  (ww_class_mutex){+.+.}, at: [<44567940>] test_cycle_work+0xf0/0x460:
						test_cycle_work at kernel/locking/test-ww_mutex.c:268
[    2.534473]
[    2.534473] but now in release context of a crosslock acquired at the following:
[    2.534473]  ((completion)&cycle->b_signal){+.+.}, at: [<4539ccd5>] wait_for_completion+0x25/0x40:
						wait_for_completion at kernel/sched/completion.c:144
[    2.534473]
[    2.534473] which lock already depends on the new lock.
[    2.534473]
[    2.536776]
[    2.536776] the existing dependency chain (in reverse order) is:
[    2.536776]
[    2.536776] -> #1 ((completion)&cycle->b_signal){+.+.}:
[    2.536776]
[    2.536776] -> #0 (ww_class_mutex){+.+.}:
[    2.536776]        test_cycle_work+0xf0/0x460:
						test_cycle_work at kernel/locking/test-ww_mutex.c:268
[    2.536776]
[    2.536776] other info that might help us debug this:
[    2.536776]
[    2.536776]  Possible unsafe locking scenario by crosslock:
[    2.536776]
[    2.536776]        CPU0                    CPU1
[    2.536776]        ----                    ----
[    2.536776]   lock(ww_class_mutex);
[    2.536776]   lock((completion)&cycle->b_signal);
[    2.536776]                                lock(ww_class_mutex);
[    2.536776]                                unlock((completion)&cycle->b_signal);
[    2.536776]
[    2.536776]  *** DEADLOCK ***
[    2.536776]
[    2.536776] 5 locks held by kworker/u4:1/54:
[    2.536776]  #0:  ((wq_completion)"test-ww_mutex"){+.+.}, at: [<444d5dc4>] process_one_work+0x294/0x1210:
						process_one_work at kernel/workqueue.c:2083
[    2.536776]  #1:  ((work_completion)(&cycle->work)){+.+.}, at: [<444d5dc4>] process_one_work+0x294/0x1210:
						process_one_work at kernel/workqueue.c:2083
[    2.536776]  #2:  (ww_class_acquire){+.+.}, at: [<444d5f2d>] process_one_work+0x3fd/0x1210:
						__read_once_size at include/linux/compiler.h:178
						 (inlined by) atomic_read at arch/x86/include/asm/atomic.h:27
						 (inlined by) static_key_count at include/linux/jump_label.h:191
						 (inlined by) static_key_false at include/linux/jump_label.h:201
						 (inlined by) trace_workqueue_execute_end at include/trace/events/workqueue.h:112
						 (inlined by) process_one_work at kernel/workqueue.c:2117
[    2.536776]  #3:  (ww_class_mutex){+.+.}, at: [<44567940>] test_cycle_work+0xf0/0x460:
						test_cycle_work at kernel/locking/test-ww_mutex.c:268
[    2.536776]  #4:  (&x->wait#7){....}, at: [<4453c625>] complete+0x25/0xa0:
						complete at kernel/sched/completion.c:35
[    2.536776]
[    2.536776] stack backtrace:
[    2.536776] CPU: 1 PID: 54 Comm: kworker/u4:1 Not tainted 4.14.0-06692-g5bbcc0f #1
[    2.536776] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[    2.536776] Workqueue: test-ww_mutex test_cycle_work
[    2.550112] Call Trace:
[    2.550112]  ? dump_stack+0x153/0x1fa:
						__dump_stack at lib/dump_stack.c:17
						 (inlined by) dump_stack at lib/dump_stack.c:53
[    2.550112]  ? print_circular_bug+0x1d3/0x1f0:
						print_circular_bug at kernel/locking/lockdep.c:1272
[    2.550112]  ? check_prev_add+0xca/0xc90:
						check_prev_add at kernel/locking/lockdep.c:1915
[    2.550112]  ? __lock_acquire+0x697/0x1370:
						__lock_acquire at kernel/locking/lockdep.c:3503
[    2.550112]  ? commit_xhlock+0x330/0x5d0:
						commit_xhlock at kernel/locking/lockdep.c:5026
[    2.550112]  ? __print_lock_name+0x130/0x130:
						copy_trace at kernel/locking/lockdep.c:4986
[    2.550112]  ? lock_commit_crosslock+0x1c4/0x3c0:
						commit_xhlocks at kernel/locking/lockdep.c:5070
						 (inlined by) lock_commit_crosslock at kernel/locking/lockdep.c:5109
[    2.550112]  ? complete+0x3d/0xa0:
						complete at kernel/sched/completion.c:42
[    2.550112]  ? test_cycle_work+0x106/0x460:
						test_cycle_work at kernel/locking/test-ww_mutex.c:269
[    2.550112]  ? process_one_work+0x3fd/0x1210:
						__read_once_size at include/linux/compiler.h:178
						 (inlined by) atomic_read at arch/x86/include/asm/atomic.h:27
						 (inlined by) static_key_count at include/linux/jump_label.h:191
						 (inlined by) static_key_false at include/linux/jump_label.h:201
						 (inlined by) trace_workqueue_execute_end at include/trace/events/workqueue.h:112
						 (inlined by) process_one_work at kernel/workqueue.c:2117
[    2.550112]  ? worker_thread+0x3e3/0xae0:
						worker_thread at kernel/workqueue.c:2246
[    2.550112]  ? kthread+0x1ac/0x220:
						kthread at kernel/kthread.c:238
[    2.550112]  ? rescuer_thread+0x640/0x640:
						worker_thread at kernel/workqueue.c:2189
[    2.550112]  ? __kthread_bind+0x50/0x50:
						kthread at kernel/kthread.c:198
[    2.550112]  ? ret_from_fork+0x19/0x30:
						ret_from_fork at arch/x86/entry/entry_32.S:299
[    8.624965] torture_init_begin: Refusing rcu init: spin_lock running.
[    8.626961] torture_init_begin: One torture test at a time!
[    9.419528] Initialise system trusted keyrings
[    9.420475] workingset: timestamp_bits=30 max_order=17 bucket_order=0
[    9.421452] zbud: loaded
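
For illustration only, the cycle lockdep is reporting boils down to the pattern
sketched below. This is a simplified userspace analogue, not the kernel's
kernel/locking/test-ww_mutex.c code: the names (struct cycle, cycle_work,
mutex_a/mutex_b, sig_a/sig_b) are made up for the sketch, pthread mutexes stand
in for ww_class_mutex, and POSIX semaphores stand in for struct completion.
Each worker waits for a completion while holding a mutex of the shared class,
and that completion is only posted from a context that also holds a mutex of
the same class, which is the crosslock dependency shown in the scenario above.

/*
 * Simplified userspace sketch of the locking pattern in the report above.
 * Assumption: names and structure are illustrative, not the actual test code.
 *
 * Build: gcc -pthread cycle.c -o cycle
 */
#include <pthread.h>
#include <sched.h>
#include <semaphore.h>
#include <stdio.h>

static pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex_b = PTHREAD_MUTEX_INITIALIZER;
static sem_t sig_a, sig_b;                /* stand-ins for struct completion */

struct cycle {
	pthread_mutex_t *a_mutex;         /* mutex this worker takes first   */
	pthread_mutex_t *b_mutex;         /* the other worker's mutex        */
	sem_t *a_signal;                  /* completion this worker posts    */
	sem_t *b_signal;                  /* completion this worker waits on */
	const char *name;
};

static void *cycle_work(void *arg)
{
	struct cycle *c = arg;

	pthread_mutex_lock(c->a_mutex);   /* ~ ww_mutex_lock(&cycle->a_mutex) */
	sem_post(c->a_signal);            /* ~ complete(cycle->a_signal)      */
	sem_wait(c->b_signal);            /* ~ wait_for_completion(), while
	                                       still holding a_mutex          */

	/*
	 * Taking a second mutex of the same class closes the cycle.  The
	 * trylock/back-off loop plays the role of the -EDEADLK handling in
	 * the real ww_mutex test; without it, the two workers would simply
	 * deadlock here.
	 */
	while (pthread_mutex_trylock(c->b_mutex) != 0) {
		pthread_mutex_unlock(c->a_mutex);
		sched_yield();
		pthread_mutex_lock(c->a_mutex);
	}

	printf("%s: acquired both mutexes\n", c->name);
	pthread_mutex_unlock(c->b_mutex);
	pthread_mutex_unlock(c->a_mutex);
	return NULL;
}

int main(void)
{
	struct cycle wa = { &mutex_a, &mutex_b, &sig_a, &sig_b, "worker A" };
	struct cycle wb = { &mutex_b, &mutex_a, &sig_b, &sig_a, "worker B" };
	pthread_t ta, tb;

	sem_init(&sig_a, 0, 0);
	sem_init(&sig_b, 0, 0);
	pthread_create(&ta, NULL, cycle_work, &wa);
	pthread_create(&tb, NULL, cycle_work, &wb);
	pthread_join(ta, NULL);
	pthread_join(tb, NULL);
	return 0;
}

With plain mutexes and no back-off the sketch would deadlock outright; the
real test instead relies on ww_mutex_lock() returning -EDEADLK so one worker
backs off, which appears to be the interaction the crosslock/completion
tracking is reporting on here.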

Attached are the full dmesg, kconfig, and reproduce scripts.

Thanks,
Fengguang

View attachment "dmesg-vm-lkp-hsw01-yocto-i386-16:20171116073239:i386-randconfig-c0-10271823:4.14.0-06692-g5bbcc0f:1" of type "text/plain" (58558 bytes)

View attachment ".config" of type "text/plain" (101310 bytes)

View attachment "job-script" of type "text/plain" (3769 bytes)

View attachment "reproduce-vm-lkp-hsw01-yocto-i386-16:20171116073239:i386-randconfig-c0-10271823:4.14.0-06692-g5bbcc0f:1" of type "text/plain" (1996 bytes)
