Message-ID: <20150802153807.GA1572@codemonkey.org.uk>
Date: Sun, 2 Aug 2015 11:38:07 -0400
From: Dave Jones <davej@...emonkey.org.uk>
To: Linux Kernel <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Josh Triplett <josh@...htriplett.org>
Subject: Re: unpinning an unpinned lock. (pidns/scheduler)

On Fri, Jul 31, 2015 at 01:43:53PM -0400, Dave Jones wrote:
> Just found a machine with this on 4.2-rc4
>
> WARNING: CPU: 0 PID: 11787 at kernel/locking/lockdep.c:3497 lock_unpin_lock+0x109/0x110()
> unpinning an unpinned lock
> CPU: 0 PID: 11787 Comm: kworker/0:1 Not tainted 4.2.0-rc4-think+ #5
> Workqueue: events proc_cleanup_work
> 0000000000000009 ffff8804f8983988 ffffffff9f7f5eed 0000000000000007
> ffff8804f89839d8 ffff8804f89839c8 ffffffff9f07b72a 00000000000000a8
> 0000000000000070 ffff8805079d5c98 0000000000000092 0000000000000002
> Call Trace:
> [<ffffffff9f7f5eed>] dump_stack+0x4f/0x7b
> [<ffffffff9f07b72a>] warn_slowpath_common+0x8a/0xc0
> [<ffffffff9f07b7a6>] warn_slowpath_fmt+0x46/0x50
> [<ffffffff9f0d0c59>] lock_unpin_lock+0x109/0x110
> [<ffffffff9f7f944f>] __schedule+0x39f/0xb30
> [<ffffffff9f7f9ca1>] schedule+0x41/0x90
> [<ffffffff9f7fe88f>] schedule_timeout+0x33f/0x5b0
> [<ffffffff9f0cfdfe>] ? put_lock_stats.isra.29+0xe/0x30
> [<ffffffff9f0d33d5>] ? mark_held_locks+0x75/0xa0
> [<ffffffff9f7ffb70>] ? _raw_spin_unlock_irq+0x30/0x60
> [<ffffffff9f0ad5e1>] ? get_parent_ip+0x11/0x50
> [<ffffffff9f7fb16c>] wait_for_completion+0xec/0x120
> [<ffffffff9f0abfc0>] ? wake_up_q+0x70/0x70
> [<ffffffff9f0f38f0>] ? rcu_barrier+0x20/0x20
> [<ffffffff9f0ea3f8>] wait_rcu_gp+0x68/0x90
> [<ffffffff9f0ea370>] ? trace_raw_output_rcu_barrier+0x80/0x80
> [<ffffffff9f7fb0b8>] ? wait_for_completion+0x38/0x120
> [<ffffffff9f0ee4dc>] synchronize_rcu+0x3c/0xb0
> [<ffffffff9f21de3f>] kern_unmount+0x2f/0x40
> [<ffffffff9f26dca5>] pid_ns_release_proc+0x15/0x20
> [<ffffffff9f1354b5>] proc_cleanup_work+0x15/0x20
> [<ffffffff9f0993f3>] process_one_work+0x1f3/0x7a0
> [<ffffffff9f099362>] ? process_one_work+0x162/0x7a0
> [<ffffffff9f099a99>] ? worker_thread+0xf9/0x470
> [<ffffffff9f099a09>] worker_thread+0x69/0x470
> [<ffffffff9f0ad773>] ? preempt_count_sub+0xa3/0xf0
> [<ffffffff9f0999a0>] ? process_one_work+0x7a0/0x7a0
> [<ffffffff9f09fbbf>] kthread+0x11f/0x140
> [<ffffffff9f09faa0>] ? kthread_create_on_node+0x250/0x250
> [<ffffffff9f80098f>] ret_from_fork+0x3f/0x70
> [<ffffffff9f09faa0>] ? kthread_create_on_node+0x250/0x250
> ---[ end trace e75342db87128aeb ]---
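
For context: __schedule() takes rq->lock and pins it with lockdep so
nothing can drop it behind the scheduler's back, and lock_unpin_lock()
warns when asked to release a pin that was never taken. A toy userspace
model of that pin_count bookkeeping (illustration only, not the kernel
implementation; the names here are made up):

/* toy model of lockdep lock pinning; not the kernel code */
#include <stdio.h>

struct held_lock {
	unsigned int pin_count;	/* outstanding pins on this held lock */
};

static void lock_pin(struct held_lock *hl)
{
	hl->pin_count++;
}

static void lock_unpin(struct held_lock *hl)
{
	if (!hl->pin_count) {
		/* the imbalance the WARNING above is reporting */
		fprintf(stderr, "WARNING: unpinning an unpinned lock\n");
		return;
	}
	hl->pin_count--;
}

int main(void)
{
	struct held_lock rq_lock = { 0 };

	lock_pin(&rq_lock);
	lock_unpin(&rq_lock);	/* balanced: quiet */
	lock_unpin(&rq_lock);	/* unbalanced: fires the warning */
	return 0;
}
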
I'm hitting this a few times a day now; I'll see if I can narrow down
a reproducer next week. Adding the RCU cabal to Cc.
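
For reference, the synchronize_rcu() in the trace is the one in
kern_unmount(), which pid_ns_release_proc() uses to tear down the
namespace's proc mount. Roughly (paraphrased from fs/namespace.c of
that era; details may differ):

void kern_unmount(struct vfsmount *mnt)
{
	/* release the long-term mount so the mountpoint can go away */
	if (!IS_ERR_OR_NULL(mnt)) {
		real_mount(mnt)->mnt_ns = NULL;
		synchronize_rcu();	/* wait out RCU walkers of the mount */
		mntput(mnt);
	}
}

So this workqueue path just blocks in schedule() waiting for a grace
period; the unpin accounting itself goes wrong somewhere under
__schedule().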
Dave