Message-ID: <20170530205330.GW22219@n2100.armlinux.org.uk>
Date: Tue, 30 May 2017 21:53:30 +0100
From: Russell King - ARM Linux <linux@...linux.org.uk>
To: linux-kernel@...r.kernel.org, Greg KH <greg@...ah.com>
Subject: 4.12-rc3: tty lockdep splat
Reproducing this bug is rather easy: ssh in to a target machine
running 4.12-rc3 with lockdep enabled, wait for the shell prompt,
and hit ^C. Works every time for me.
======================================================
WARNING: possible circular locking dependency detected
4.12.0-rc3+ #213 Not tainted
------------------------------------------------------
kworker/u4:0/1089 is trying to acquire lock:
(&buf->lock){+.+...}, at: [<c0387640>] tty_buffer_flush+0x44/0xc8
but task is already holding lock:
(&o_tty->termios_rwsem/1){++++..}, at: [<c0381f44>] isig+0x40/0xb4
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&o_tty->termios_rwsem/1){++++..}:
       down_read+0x40/0x88
       n_tty_write+0x98/0x488
       tty_write+0x19c/0x2a4
       __vfs_write+0x34/0x11c
       vfs_write+0xac/0x16c
       SyS_write+0x44/0x90
       ret_fast_syscall+0x0/0x1c

-> #1 (&tty->atomic_write_lock){+.+.+.}:
       __mutex_lock+0x58/0x930
       mutex_lock_nested+0x24/0x2c
       tty_port_default_receive_buf+0x44/0x80
       flush_to_ldisc+0xac/0xc4
       process_one_work+0x19c/0x40c
       worker_thread+0x30/0x4c8
       kthread+0x124/0x15c
       ret_from_fork+0x14/0x24

-> #0 (&buf->lock){+.+...}:
       lock_acquire+0x74/0x94
       __mutex_lock+0x58/0x930
       mutex_lock_nested+0x24/0x2c
       tty_buffer_flush+0x44/0xc8
       pty_flush_buffer+0x28/0x6c
       tty_driver_flush_buffer+0x20/0x24
       isig+0x7c/0xb4
       n_tty_receive_signal_char+0x20/0x68
       n_tty_receive_char_special+0x834/0xa24
       n_tty_receive_buf_common+0x7a0/0x9d4
       n_tty_receive_buf2+0x1c/0x24
       tty_ldisc_receive_buf+0x28/0x64
       tty_port_default_receive_buf+0x58/0x80
       flush_to_ldisc+0xac/0xc4
       process_one_work+0x19c/0x40c
       worker_thread+0x30/0x4c8
       kthread+0x124/0x15c
       ret_from_fork+0x14/0x24
other info that might help us debug this:
Chain exists of:
  &buf->lock --> &tty->atomic_write_lock --> &o_tty->termios_rwsem/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&o_tty->termios_rwsem/1);
                               lock(&tty->atomic_write_lock);
                               lock(&o_tty->termios_rwsem/1);
  lock(&buf->lock);

 *** DEADLOCK ***
6 locks held by kworker/u4:0/1089:
#0: ("events_unbound"){.+.+.+}, at: [<c0049c30>] process_one_work+0x128/0x40c
#1: ((&buf->work)){+.+...}, at: [<c0049c30>] process_one_work+0x128/0x40c
#2: (&port->buf.lock/1){+.+...}, at: [<c03874b0>] flush_to_ldisc+0x24/0xc4
#3: (&tty->ldisc_sem){++++.+}, at: [<c03867a4>] tty_ldisc_ref+0x1c/0x50
#4: (&tty->atomic_write_lock){+.+.+.}, at: [<c03881f8>] tty_port_default_receive_buf+0x44/0x80
#5: (&o_tty->termios_rwsem/1){++++..}, at: [<c0381f44>] isig+0x40/0xb4
stack backtrace:
CPU: 0 PID: 1089 Comm: kworker/u4:0 Not tainted 4.12.0-rc3+ #213
Hardware name: Marvell Armada 380/385 (Device Tree)
Workqueue: events_unbound flush_to_ldisc
Backtrace:
[<c001361c>] (dump_backtrace) from [<c0013a2c>] (show_stack+0x18/0x1c)
r6:60000093 r5:ffffffff r4:00000000 r3:00040c00
[<c0013a14>] (show_stack) from [<c02e6a08>] (dump_stack+0xa4/0xdc)
[<c02e6964>] (dump_stack) from [<c00ef85c>] (print_circular_bug+0x28c/0x2e0)
r6:c0b56bc4 r5:c0b6dc74 r4:c0b4b944 r3:c099c4c0
[<c00ef5d0>] (print_circular_bug) from [<c007bcf4>] (__lock_acquire+0x1620/0x1830)
r10:00000000 r8:00000006 r7:ed9fe968 r6:c09cb054 r5:ed9fe400 r4:c1207d5c
[<c007a6d4>] (__lock_acquire) from [<c007c654>] (lock_acquire+0x74/0x94)
r10:00000002 r9:ede48400 r8:c1207d5c r7:00000001 r6:00000001 r5:60000013
r4:00000000
[<c007c5e0>] (lock_acquire) from [<c071395c>] (__mutex_lock+0x58/0x930)
r7:ede83a24 r6:00000000 r5:00000000 r4:ede83a24
[<c0713904>] (__mutex_lock) from [<c07142dc>] (mutex_lock_nested+0x24/0x2c)
r10:ede48800 r9:ede48400 r8:00000000 r7:ede83a24 r6:ede83a68 r5:ede48400
r4:ede83a00
[<c07142b8>] (mutex_lock_nested) from [<c0387640>] (tty_buffer_flush+0x44/0xc8)
[<c03875fc>] (tty_buffer_flush) from [<c038a318>] (pty_flush_buffer+0x28/0x6c)
r10:00000002 r8:f10da29c r7:ede48530 r6:00000000 r5:ede48400 r4:ede48800
[<c038a2f0>] (pty_flush_buffer) from [<c0384c00>] (tty_driver_flush_buffer+0x20/0x24)
r6:00000000 r5:f10d8000 r4:ede48400 r3:c038a2f0
[<c0384be0>] (tty_driver_flush_buffer) from [<c0381f80>] (isig+0x7c/0xb4)
[<c0381f04>] (isig) from [<c0382d18>] (n_tty_receive_signal_char+0x20/0x68)
r10:00000000 r8:00000000 r7:00000000 r6:00000003 r5:00000003 r4:ede48400
[<c0382cf8>] (n_tty_receive_signal_char) from [<c0383594>] (n_tty_receive_char_special+0x834/0xa24)
r5:f10d8000 r4:ede48400
[<c0382d60>] (n_tty_receive_char_special) from [<c0383f24>] (n_tty_receive_buf_common+0x7a0/0x9d4)
r10:00000000 r9:ede48400 r8:00000000 r7:00000000 r6:edf18c1f r5:00000001
r4:edf18c1e
[<c0383784>] (n_tty_receive_buf_common) from [<c0384174>] (n_tty_receive_buf2+0x1c/0x24)
r10:00000001 r9:00000000 r8:edf18c1e r7:00000000 r6:00000001 r5:c09d5238
r4:00000001
[<c0384158>] (n_tty_receive_buf2) from [<c0386f8c>] (tty_ldisc_receive_buf+0x28/0x64)
[<c0386f64>] (tty_ldisc_receive_buf) from [<c038820c>] (tty_port_default_receive_buf+0x58/0x80)
r5:ed995fc0 r4:ede48464
[<c03881b4>] (tty_port_default_receive_buf) from [<c0387538>] (flush_to_ldisc+0xac/0xc4)
r8:ef005000 r7:ede83c24 r6:ede83c00 r5:ede83c04 r4:edf18c00 r3:00000001
[<c038748c>] (flush_to_ldisc) from [<c0049ca4>] (process_one_work+0x19c/0x40c)
r7:ed923f08 r6:ef008000 r5:ede83c04 r4:edf99100
[<c0049b08>] (process_one_work) from [<c0049f84>] (worker_thread+0x30/0x4c8)
r10:c09ab900 r9:ef008000 r8:ef008000 r7:00000088 r6:edf99118 r5:ef008034
r4:edf99100
[<c0049f54>] (worker_thread) from [<c0050c08>] (kthread+0x124/0x15c)
r10:c0049f54 r9:ef383e4c r8:edf99100 r7:edf990b8 r6:ed88d140 r5:00000000
r4:edf99080
[<c0050ae4>] (kthread) from [<c000ff30>] (ret_from_fork+0x14/0x24)
r10:00000000 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:c0050ae4
r4:ed88d140 r3:ed922000
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.