Message-ID: <CANRm+Cyf54QaXaE4EMyDVnoJ+tWjiUS6ZcBPhNzBXTxR_4G_GA@mail.gmail.com>
Date: Wed, 31 Aug 2016 09:04:17 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: possible circular lockdep detected
[ 191.783739] ======================================================
[ 191.784194] [ INFO: possible circular locking dependency detected ]
[ 191.784194] 4.8.0-rc4+ #30 Not tainted
[ 191.784194] -------------------------------------------------------
[ 191.784194] rtkit-daemon/1983 is trying to acquire lock:
[ 191.784194] (tk_core){----..}, at: [<ffffffffb80d9807>]
enqueue_task_rt+0x2b7/0x330
[ 191.784194]
[ 191.784194] but task is already holding lock:
[ 191.784194] (&rt_b->rt_runtime_lock){-.-...}, at:
[<ffffffffb80d970b>] enqueue_task_rt+0x1bb/0x330
[ 191.784194]
[ 191.784194] which lock already depends on the new lock.
[ 191.784194]
[ 191.784194]
[ 191.784194] the existing dependency chain (in reverse order) is:
[ 191.784194]
-> #4 (&rt_b->rt_runtime_lock){-.-...}:
[ 191.784194] [<ffffffffb80ed192>] lock_acquire+0x132/0x250
[ 191.784194] [<ffffffffb88c6a0d>] _raw_spin_lock+0x3d/0x80
[ 191.784194] [<ffffffffb80d970b>] enqueue_task_rt+0x1bb/0x330
[ 191.784194] [<ffffffffb80be000>] __sched_setscheduler+0x2c0/0xb70
[ 191.784194] [<ffffffffb80be918>] _sched_setscheduler+0x68/0x70
[ 191.784194] [<ffffffffb80be933>] sched_setscheduler+0x13/0x20
[ 191.784194] [<ffffffffb816f9db>] watchdog_enable+0xab/0x1d0
[ 191.784194] [<ffffffffb80b7bb0>] smpboot_thread_fn+0xe0/0x1d0
[ 191.784194] [<ffffffffb80b3e81>] kthread+0x101/0x120
[ 191.784194] [<ffffffffb88c7a8f>] ret_from_fork+0x1f/0x40
[ 191.784194]
-> #3 (&rq->lock){-.-.-.}:
[ 191.784194] [<ffffffffb80ed192>] lock_acquire+0x132/0x250
[ 191.784194] [<ffffffffb88c6a0d>] _raw_spin_lock+0x3d/0x80
[ 191.784194] [<ffffffffb80d0dd3>] task_fork_fair+0x33/0xf0
[ 191.784194] [<ffffffffb80c15c4>] sched_fork+0x104/0x250
[ 191.828540] [<ffffffffb808ab79>] copy_process.part.32+0x709/0x1e00
[ 191.828540] [<ffffffffb808c473>] _do_fork+0xf3/0x700
[ 191.828540] [<ffffffffb808caa9>] kernel_thread+0x29/0x30
[ 191.828540] [<ffffffffb88b6f83>] rest_init+0x23/0x140
[ 191.828540] [<ffffffffb9167183>] start_kernel+0x4c1/0x4ce
[ 191.828540] [<ffffffffb91665ef>] x86_64_start_reservations+0x24/0x26
[ 191.828540] [<ffffffffb9166733>] x86_64_start_kernel+0x142/0x14f
[ 191.828540]
-> #2 (&p->pi_lock){-.-.-.}:
[ 191.828540] [<ffffffffb80ed192>] lock_acquire+0x132/0x250
[ 191.828540] [<ffffffffb88c727d>] _raw_spin_lock_irqsave+0x6d/0xb0
[ 191.828540] [<ffffffffb80c0191>] try_to_wake_up+0x31/0x5c0
[ 191.828540] [<ffffffffb80c0735>] wake_up_process+0x15/0x20
[ 191.828540] [<ffffffffb80abae0>] create_worker+0x130/0x1a0
[ 191.828540] [<ffffffffb918ae3e>] init_workqueues+0x369/0x571
[ 191.828540] [<ffffffffb80021d0>] do_one_initcall+0x50/0x1b0
[ 191.828540] [<ffffffffb91672ac>] kernel_init_freeable+0x11c/0x261
[ 191.828540] [<ffffffffb88b70ae>] kernel_init+0xe/0x110
[ 191.828540] [<ffffffffb88c7a8f>] ret_from_fork+0x1f/0x40
[ 191.828540]
-> #1 (&(&pool->lock)->rlock#2){-.-...}:
[ 191.828540] [<ffffffffb80ed192>] lock_acquire+0x132/0x250
[ 191.828540] [<ffffffffb88c6a0d>] _raw_spin_lock+0x3d/0x80
[ 191.828540] [<ffffffffb80ab1d1>] __queue_work+0x2b1/0x5c0
[ 191.828540] [<ffffffffb80ac392>] queue_work_on+0x62/0xd0
[ 191.828540] [<ffffffffc02d1840>] pvclock_gtod_notify+0xe0/0xf0 [kvm]
[ 191.828540] [<ffffffffb80b52a9>] notifier_call_chain+0x49/0x70
[ 191.828540] [<ffffffffb80b5386>] raw_notifier_call_chain+0x16/0x20
[ 191.828540] [<ffffffffb8121488>] timekeeping_update+0xd8/0x150
[ 191.828540] [<ffffffffb8121fcf>] change_clocksource+0xaf/0x100
[ 191.828540] [<ffffffffb8153cbe>] multi_cpu_stop+0xfe/0x160
[ 191.828540] [<ffffffffb8153af4>] cpu_stopper_thread+0x74/0x100
[ 191.828540] [<ffffffffb80b7be7>] smpboot_thread_fn+0x117/0x1d0
[ 191.828540] [<ffffffffb80b3e81>] kthread+0x101/0x120
[ 191.828540] [<ffffffffb88c7a8f>] ret_from_fork+0x1f/0x40
[ 191.828540]
-> #0 (tk_core){----..}:
[ 191.828540] [<ffffffffb80ec832>] __lock_acquire+0x1672/0x18b0
[ 191.828540] [<ffffffffb80ed192>] lock_acquire+0x132/0x250
[ 191.828540] [<ffffffffb8122826>] ktime_get+0x76/0x180
[ 191.828540] [<ffffffffb80d9807>] enqueue_task_rt+0x2b7/0x330
[ 191.828540] [<ffffffffb80bef6c>] activate_task+0x5c/0xa0
[ 191.828540] [<ffffffffb80bf464>] ttwu_do_activate+0x54/0xb0
[ 191.828540] [<ffffffffb80c03bb>] try_to_wake_up+0x25b/0x5c0
[ 191.828540] [<ffffffffb80c07d2>] default_wake_function+0x12/0x20
[ 191.828540] [<ffffffffb82767f6>] pollwake+0x66/0x70
[ 191.828540] [<ffffffffb80ddb35>] __wake_up_common+0x55/0x90
[ 191.828540] [<ffffffffb80ddba8>] __wake_up_locked_key+0x18/0x20
[ 191.828540] [<ffffffffb82b560b>] eventfd_write+0xdb/0x210
[ 191.828540] [<ffffffffb825f948>] __vfs_write+0x28/0x120
[ 191.828540] [<ffffffffb8260055>] vfs_write+0xb5/0x1b0
[ 191.828540] [<ffffffffb82613e9>] SyS_write+0x49/0xa0
[ 191.828540] [<ffffffffb8003ba1>] do_syscall_64+0x81/0x220
[ 191.828540] [<ffffffffb88c7903>] return_from_SYSCALL_64+0x0/0x7a
[ 191.828540]
[ 191.828540] other info that might help us debug this:
[ 191.828540]
[ 191.828540] Chain exists of:
tk_core --> &rq->lock --> &rt_b->rt_runtime_lock
[ 191.828540] Possible unsafe locking scenario:
[ 191.828540]
[ 191.828540] CPU0 CPU1
[ 191.828540] ---- ----
[ 191.828540] lock(&rt_b->rt_runtime_lock);
[ 191.828540] lock(&rq->lock);
[ 191.828540] lock(&rt_b->rt_runtime_lock);
[ 191.828540] lock(tk_core);
[ 191.828540]
[ 191.828540] *** DEADLOCK ***
[ 191.828540]
[ 191.828540] 4 locks held by rtkit-daemon/1983:
[ 191.828540] #0: (&ctx->wqh){......}, at: [<ffffffffb82b55cc>]
eventfd_write+0x9c/0x210
[ 191.828540] #1: (&p->pi_lock){-.-.-.}, at: [<ffffffffb80c0191>]
try_to_wake_up+0x31/0x5c0
[ 191.828540] #2: (&rq->lock){-.-.-.}, at: [<ffffffffb80c0398>]
try_to_wake_up+0x238/0x5c0
[ 191.828540] #3: (&rt_b->rt_runtime_lock){-.-...}, at:
[<ffffffffb80d970b>] enqueue_task_rt+0x1bb/0x330
[ 191.828540]
[ 191.828540] stack backtrace:
[ 191.828540] CPU: 2 PID: 1983 Comm: rtkit-daemon Not tainted 4.8.0-rc4+ #30
[ 191.828540] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS Bochs 01/01/2011
[ 191.828540] 0000000000000000 ffff9848adc8ba78 ffffffffb84475d9
ffffffffb9a05c90
[ 191.828540] ffffffffb9a03d90 ffff9848adc8bac0 ffffffffb81c1f72
ffff9848adc8bb00
[ 191.828540] ffff984935871ac0 0000000000000003 ffff984935872440
ffff984935871ac0
[ 191.828540] Call Trace:
[ 191.828540] [<ffffffffb84475d9>] dump_stack+0x99/0xd0
[ 191.828540] [<ffffffffb81c1f72>] print_circular_bug+0x209/0x218
[ 191.828540] [<ffffffffb80ec832>] __lock_acquire+0x1672/0x18b0
[ 191.828540] [<ffffffffb80c7418>] ? sched_clock_local+0x18/0x80
[ 191.828540] [<ffffffffb80ed192>] lock_acquire+0x132/0x250
[ 191.828540] [<ffffffffb80d9807>] ? enqueue_task_rt+0x2b7/0x330
[ 191.828540] [<ffffffffb8122826>] ktime_get+0x76/0x180
[ 191.828540] [<ffffffffb80d9807>] ? enqueue_task_rt+0x2b7/0x330
[ 191.828540] [<ffffffffb80d9807>] enqueue_task_rt+0x2b7/0x330
[ 191.828540] [<ffffffffb80bef6c>] activate_task+0x5c/0xa0
[ 191.828540] [<ffffffffb80bf464>] ttwu_do_activate+0x54/0xb0
[ 191.828540] [<ffffffffb80c03bb>] try_to_wake_up+0x25b/0x5c0
[ 191.828540] [<ffffffffb80c07d2>] default_wake_function+0x12/0x20
[ 191.828540] [<ffffffffb82767f6>] pollwake+0x66/0x70
[ 191.828540] [<ffffffffb80c07c0>] ? wake_up_q+0x80/0x80
[ 191.828540] [<ffffffffb80ddb35>] __wake_up_common+0x55/0x90
[ 191.828540] [<ffffffffb80ddba8>] __wake_up_locked_key+0x18/0x20
[ 191.828540] [<ffffffffb82b560b>] eventfd_write+0xdb/0x210
[ 191.828540] [<ffffffffb80c07c0>] ? wake_up_q+0x80/0x80
[ 191.828540] [<ffffffffb825f948>] __vfs_write+0x28/0x120
[ 191.828540] [<ffffffffb83e7b28>] ? apparmor_file_permission+0x18/0x20
[ 191.828540] [<ffffffffb83a4a6d>] ? security_file_permission+0x3d/0xc0
[ 191.828540] [<ffffffffb825fdf9>] ? rw_verify_area+0x49/0xb0
[ 191.828540] [<ffffffffb8260055>] vfs_write+0xb5/0x1b0
[ 191.828540] [<ffffffffb82613e9>] SyS_write+0x49/0xa0
[ 191.828540] [<ffffffffb8003ba1>] do_syscall_64+0x81/0x220
[ 191.828540] [<ffffffffb88c7903>] entry_SYSCALL64_slow_path+0x25/0x25

This splat sometimes shows up in a KVM guest when the host machine is
suspended and resumed. It cannot be reproduced readily; any suggestions
would be greatly appreciated.
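
For what it's worth, the cycle lockdep reports boils down to one path
taking the timekeeping core before the workqueue/scheduler locks
(timekeeping_update -> pvclock_gtod_notify -> queue_work_on) and another
path reading the timekeeping core via ktime_get() while already holding
rt_b->rt_runtime_lock (enqueue_task_rt). Below is only a minimal
user-space sketch of that kind of ordering inversion, plain pthreads
with hypothetical lock_a/lock_b standing in for tk_core and
rt_b->rt_runtime_lock; it is an illustration of the pattern, not kernel
code.

/*
 * Minimal user-space illustration of the ordering inversion lockdep is
 * flagging: one path takes lock_a then lock_b, the other takes lock_b
 * then lock_a.  With unlucky timing each thread ends up waiting for the
 * lock the other one holds.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* stands in for tk_core */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* stands in for rt_runtime_lock */

static void *path_one(void *arg)
{
	(void)arg;
	/* analogous to timekeeping_update() -> pvclock_gtod_notify() */
	pthread_mutex_lock(&lock_a);
	usleep(1000);			/* widen the race window */
	pthread_mutex_lock(&lock_b);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *path_two(void *arg)
{
	(void)arg;
	/* analogous to enqueue_task_rt() -> ktime_get() */
	pthread_mutex_lock(&lock_b);
	usleep(1000);
	pthread_mutex_lock(&lock_a);
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, path_one, NULL);
	pthread_create(&t2, NULL, path_two, NULL);
	pthread_join(t1, NULL);		/* with unlucky timing this never returns */
	pthread_join(t2, NULL);
	printf("no deadlock this run\n");
	return 0;
}

Built with gcc -pthread, the two threads occasionally deadlock in the
same ABBA fashion as the "Possible unsafe locking scenario" shown above.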
Regards,
Wanpeng Li