Message-ID: <20170119201303.GO6485@twins.programming.kicks-ass.net>
Date: Thu, 19 Jan 2017 21:13:03 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...nel.org, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
hpa@...or.com, chris@...is-wilson.co.uk, nhaehnle@...il.com,
linux-kernel@...r.kernel.org, dev@...ankhorst.nl,
tglx@...utronix.de
Cc: linux-tip-commits@...r.kernel.org
Subject: Re: [tip:locking/core] locking/ww_mutex: Add kselftests for ww_mutex stress

Hi Chris,

I got the splat below; I'm still trying to wrap my head around what
exactly the test does.
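
For context, here is my rough sketch of the documented ww_mutex
acquire/backoff pattern that I'd expect an in-order stress test like
stress_inorder_work to exercise; this is not the test code itself, and
lock_unlock_all(), locks[] and nlocks are made-up names:

	#include <linux/ww_mutex.h>

	static DEFINE_WW_CLASS(sketch_ww_class);

	/* Take locks[0..nlocks-1] in index order, do some work, drop them all. */
	static void lock_unlock_all(struct ww_mutex *locks, int nlocks)
	{
		struct ww_acquire_ctx ctx;
		int contended = -1;	/* index already held via the slow path */
		int i, n, err;

		ww_acquire_init(&ctx, &sketch_ww_class);
	retry:
		for (n = 0; n < nlocks; n++) {
			if (n == contended)	/* taken in the backoff below */
				continue;

			err = ww_mutex_lock(&locks[n], &ctx);
			if (err == -EDEADLK) {
				/* back off: release everything currently held */
				for (i = n - 1; i >= 0; i--)
					ww_mutex_unlock(&locks[i]);
				if (contended > n)
					ww_mutex_unlock(&locks[contended]);
				/* sleep on the contended mutex, keep it, retry */
				ww_mutex_lock_slow(&locks[n], &ctx);
				contended = n;
				goto retry;
			}
		}
		ww_acquire_done(&ctx);

		/* ... the work that needs all nlocks mutexes goes here ... */

		for (i = nlocks - 1; i >= 0; i--)
			ww_mutex_unlock(&locks[i]);
		ww_acquire_fini(&ctx);
	}

If the unwind after a -EDEADLK backoff ever drops a mutex that isn't
actually held (or drops the contended one twice), lockdep would report
exactly the kind of unlock imbalance below.
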
=====================================
[ BUG: bad unlock balance detected! ]
4.10.0-rc4-00122-g6edad167-dirty #608 Not tainted
-------------------------------------
kworker/u81:37/330 is trying to release lock (ww_class_mutex) at:
[<ffffffff8115260a>] stress_inorder_work+0x15a/0x330
but there are no more locks to release!
other info that might help us debug this:
4 locks held by kworker/u81:37/330:
#0: ("test-ww_mutex"){++++.+}, at: [<ffffffff81111d62>] process_one_work+0x1f2/0x710
#1: ((&stress->work)){+.+.+.}, at: [<ffffffff81111d62>] process_one_work+0x1f2/0x710
#2: (ww_class_acquire){+.+...}, at: [<ffffffff81111ddd>] process_one_work+0x26d/0x710
#3: (ww_class_mutex){+.+...}, at: [<ffffffff811525ba>] stress_inorder_work+0x10a/0x330
stack backtrace:
CPU: 22 PID: 330 Comm: kworker/u81:37 Not tainted 4.10.0-rc4-00122-g6edad167-dirty #608
Hardware name: Intel Corporation S2600GZ/S2600GZ, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
Workqueue: test-ww_mutex stress_inorder_work
Call Trace:
dump_stack+0x86/0xc3
? stress_inorder_work+0x15a/0x330
print_unlock_imbalance_bug+0xd8/0xe0
? stress_inorder_work+0x15a/0x330
lock_release+0x374/0x4b0
? schedule_hrtimeout_range_clock+0xda/0x270
__mutex_unlock_slowpath+0x43/0x2e0
ww_mutex_unlock+0x47/0xb0
stress_inorder_work+0x15a/0x330
? process_one_work+0x26d/0x710
process_one_work+0x26d/0x710
? process_one_work+0x1f2/0x710
worker_thread+0x48/0x4e0
kthread+0x101/0x140
? process_one_work+0x710/0x710
? kthread_create_on_node+0x40/0x40
ret_from_fork+0x31/0x40
INFO: task swapper/0:1 blocked for more than 120 seconds.
Not tainted 4.10.0-rc4-00122-g6edad167-dirty #608
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
swapper/0 D 0 1 0 0x00000000
Call Trace:
__schedule+0x30a/0xc60
schedule+0x40/0x90
schedule_timeout+0x30a/0x670
? wait_for_completion+0x23/0x100
? trace_hardirqs_on_caller+0x100/0x1c0
wait_for_completion+0x97/0x100
? wake_up_q+0x70/0x70
flush_workqueue+0x1f6/0x6b0
? flush_workqueue+0xe8/0x6b0
stress+0x1b3/0x203
test_ww_mutex_init+0x30a/0x324
? __pv_init_lock_hash+0x59/0x59
? set_debug_rodata+0x12/0x12
do_one_initcall+0x43/0x180
? set_debug_rodata+0x12/0x12
kernel_init_freeable+0x1d9/0x266
? rest_init+0x140/0x140
kernel_init+0xe/0x100
ret_from_fork+0x31/0x40
INFO: lockdep is turned off.