Message-ID: <CAAeHK+zOQRExvWzZQKpHhsdOnVah=zDsBdL2Q_SpamL1ttv4gg@mail.gmail.com>
Date: Mon, 29 May 2017 17:19:31 +0200
From: Andrey Konovalov <andreyknvl@...gle.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jslaby@...e.com>,
LKML <linux-kernel@...r.kernel.org>
Cc: Dmitry Vyukov <dvyukov@...gle.com>,
Kostya Serebryany <kcc@...gle.com>,
syzkaller <syzkaller@...glegroups.com>
Subject: tty: possible deadlock in tty_buffer_flush

Hi,

I've got the following error report while fuzzing the kernel with syzkaller.

On commit 5ed02dbb497422bf225783f46e6eadd237d23d6b (4.12-rc3).

======================================================
WARNING: possible circular locking dependency detected
4.12.0-rc3+ #369 Not tainted
------------------------------------------------------
kworker/u9:1/31 is trying to acquire lock:
(&buf->lock){+.+...}, at: [<ffffffff823dd42b>]
tty_buffer_flush+0xbb/0x3b0 drivers/tty/tty_buffer.c:221
but task is already holding lock:
(&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff823cd5d1>]
isig+0xa1/0x4d0 drivers/tty/n_tty.c:1100
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&o_tty->termios_rwsem/1){++++..}:
validate_chain kernel/locking/lockdep.c:2281 [inline]
__lock_acquire+0x220c/0x3690 kernel/locking/lockdep.c:3367
lock_acquire+0x22d/0x560 kernel/locking/lockdep.c:3855
down_read+0x96/0x160 kernel/locking/rwsem.c:23
n_tty_write+0x29a/0xfb0 drivers/tty/n_tty.c:2287
do_tty_write drivers/tty/tty_io.c:887 [inline]
tty_write+0x3eb/0x8a0 drivers/tty/tty_io.c:971
__vfs_write+0x5d5/0x760 fs/read_write.c:508
vfs_write+0x187/0x500 fs/read_write.c:558
SYSC_write fs/read_write.c:605 [inline]
SyS_write+0xfb/0x230 fs/read_write.c:597
entry_SYSCALL_64_fastpath+0x1f/0xbe
-> #1 (&tty->atomic_write_lock){+.+.+.}:
validate_chain kernel/locking/lockdep.c:2281 [inline]
__lock_acquire+0x220c/0x3690 kernel/locking/lockdep.c:3367
lock_acquire+0x22d/0x560 kernel/locking/lockdep.c:3855
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0x18b/0x1900 kernel/locking/mutex.c:893
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
tty_port_default_receive_buf+0x109/0x190 drivers/tty/tty_port.c:37
receive_buf drivers/tty/tty_buffer.c:448 [inline]
flush_to_ldisc+0x3e3/0x5b0 drivers/tty/tty_buffer.c:497
process_one_work+0xc03/0x1bd0 kernel/workqueue.c:2097
worker_thread+0x223/0x1860 kernel/workqueue.c:2231
kthread+0x35e/0x430 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:424
-> #0 (&buf->lock){+.+...}:
check_prev_add kernel/locking/lockdep.c:1844 [inline]
check_prevs_add+0xadb/0x1b40 kernel/locking/lockdep.c:1954
validate_chain kernel/locking/lockdep.c:2281 [inline]
__lock_acquire+0x220c/0x3690 kernel/locking/lockdep.c:3367
lock_acquire+0x22d/0x560 kernel/locking/lockdep.c:3855
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0x18b/0x1900 kernel/locking/mutex.c:893
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
tty_buffer_flush+0xbb/0x3b0 drivers/tty/tty_buffer.c:221
pty_flush_buffer+0x50/0x160 drivers/tty/pty.c:223
tty_driver_flush_buffer+0x65/0x80 drivers/tty/tty_ioctl.c:94
isig+0x16f/0x4d0 drivers/tty/n_tty.c:1111
n_tty_receive_signal_char+0x22/0xf0 drivers/tty/n_tty.c:1212
n_tty_receive_char_special+0xc5a/0x2c80 drivers/tty/n_tty.c:1264
n_tty_receive_buf_fast drivers/tty/n_tty.c:1579 [inline]
__receive_buf drivers/tty/n_tty.c:1613 [inline]
n_tty_receive_buf_common+0x1abc/0x2630 drivers/tty/n_tty.c:1711
n_tty_receive_buf2+0x33/0x40 drivers/tty/n_tty.c:1746
tty_ldisc_receive_buf+0x2e9/0x420 drivers/tty/tty_buffer.c:429
tty_port_default_receive_buf+0x122/0x190 drivers/tty/tty_port.c:38
receive_buf drivers/tty/tty_buffer.c:448 [inline]
flush_to_ldisc+0x3e3/0x5b0 drivers/tty/tty_buffer.c:497
process_one_work+0xc03/0x1bd0 kernel/workqueue.c:2097
worker_thread+0x223/0x1860 kernel/workqueue.c:2231
kthread+0x35e/0x430 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:424
other info that might help us debug this:

Chain exists of:
  &buf->lock --> &tty->atomic_write_lock --> &o_tty->termios_rwsem/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&o_tty->termios_rwsem/1);
                               lock(&tty->atomic_write_lock);
                               lock(&o_tty->termios_rwsem/1);
  lock(&buf->lock);

 *** DEADLOCK ***

6 locks held by kworker/u9:1/31:
#0: ("events_unbound"){.+.+.+}, at: [<ffffffff8132fdaf>]
__write_once_size include/linux/compiler.h:283 [inline]
#0: ("events_unbound"){.+.+.+}, at: [<ffffffff8132fdaf>]
atomic64_set arch/x86/include/asm/atomic64_64.h:33 [inline]
#0: ("events_unbound"){.+.+.+}, at: [<ffffffff8132fdaf>]
atomic_long_set include/asm-generic/atomic-long.h:56 [inline]
#0: ("events_unbound"){.+.+.+}, at: [<ffffffff8132fdaf>]
set_work_data kernel/workqueue.c:617 [inline]
#0: ("events_unbound"){.+.+.+}, at: [<ffffffff8132fdaf>]
set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
#0: ("events_unbound"){.+.+.+}, at: [<ffffffff8132fdaf>]
process_one_work+0xaef/0x1bd0 kernel/workqueue.c:2090
#1: ((&buf->work)){+.+...}, at: [<ffffffff8132fe02>]
process_one_work+0xb42/0x1bd0 kernel/workqueue.c:2094
#2: (&port->buf.lock/1){+.+...}, at: [<ffffffff823db680>]
flush_to_ldisc+0xb0/0x5b0 drivers/tty/tty_buffer.c:469
#3: (&tty->ldisc_sem){++++++}, at: [<ffffffff823d932b>]
tty_ldisc_ref+0x1b/0x80 drivers/tty/tty_ldisc.c:297
#4: (&tty->atomic_write_lock){+.+.+.}, at: [<ffffffff823de4c9>]
tty_port_default_receive_buf+0x109/0x190 drivers/tty/tty_port.c:37
#5: (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff823cd5d1>]
isig+0xa1/0x4d0 drivers/tty/n_tty.c:1100
stack backtrace:
CPU: 0 PID: 31 Comm: kworker/u9:1 Not tainted 4.12.0-rc3+ #369
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: events_unbound flush_to_ldisc
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x292/0x395 lib/dump_stack.c:52
print_circular_bug+0x310/0x3c0 kernel/locking/lockdep.c:1218
check_prev_add kernel/locking/lockdep.c:1844 [inline]
check_prevs_add+0xadb/0x1b40 kernel/locking/lockdep.c:1954
validate_chain kernel/locking/lockdep.c:2281 [inline]
__lock_acquire+0x220c/0x3690 kernel/locking/lockdep.c:3367
lock_acquire+0x22d/0x560 kernel/locking/lockdep.c:3855
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0x18b/0x1900 kernel/locking/mutex.c:893
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
tty_buffer_flush+0xbb/0x3b0 drivers/tty/tty_buffer.c:221
pty_flush_buffer+0x50/0x160 drivers/tty/pty.c:223
tty_driver_flush_buffer+0x65/0x80 drivers/tty/tty_ioctl.c:94
isig+0x16f/0x4d0 drivers/tty/n_tty.c:1111
n_tty_receive_signal_char+0x22/0xf0 drivers/tty/n_tty.c:1212
n_tty_receive_char_special+0xc5a/0x2c80 drivers/tty/n_tty.c:1264
n_tty_receive_buf_fast drivers/tty/n_tty.c:1579 [inline]
__receive_buf drivers/tty/n_tty.c:1613 [inline]
n_tty_receive_buf_common+0x1abc/0x2630 drivers/tty/n_tty.c:1711
n_tty_receive_buf2+0x33/0x40 drivers/tty/n_tty.c:1746
tty_ldisc_receive_buf+0x2e9/0x420 drivers/tty/tty_buffer.c:429
tty_port_default_receive_buf+0x122/0x190 drivers/tty/tty_port.c:38
receive_buf drivers/tty/tty_buffer.c:448 [inline]
flush_to_ldisc+0x3e3/0x5b0 drivers/tty/tty_buffer.c:497
process_one_work+0xc03/0x1bd0 kernel/workqueue.c:2097
worker_thread+0x223/0x1860 kernel/workqueue.c:2231
kthread+0x35e/0x430 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:424
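
In case it helps with the analysis, below is a rough, untested userspace
sketch of the kind of pty traffic the traces above point at. It is only
meant to illustrate the lock ordering in the report (an INTR character
pushed through the master with ISIG set and NOFLSH clear, racing against
writes on the slave side); it is not a verified reproducer, and the exact
termios settings and thread layout are my assumptions.

/*
 * Rough, untested sketch -- NOT a verified reproducer for this report.
 * It only tries to illustrate the lock ordering the traces describe:
 * one thread writes to the pty slave (n_tty_write, which runs under
 * atomic_write_lock and termios_rwsem), while the main thread pushes
 * the INTR character (^C) through the master, so that isig() on the
 * receiving side flushes the linked tty's buffers via
 * pty_flush_buffer() -> tty_buffer_flush() (which takes buf->lock).
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

static int master_fd, slave_fd;

/* Keep the slave's write path (tty_write -> n_tty_write) busy. */
static void *slave_writer(void *arg)
{
	char buf[256];

	memset(buf, 'x', sizeof(buf));
	for (;;)
		write(slave_fd, buf, sizeof(buf));
	return NULL;
}

/* Drain the master so the slave writes above don't just block forever. */
static void *master_reader(void *arg)
{
	char buf[4096];

	for (;;)
		read(master_fd, buf, sizeof(buf));
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	struct termios tio;
	char intr = 0x03;	/* default VINTR, i.e. ^C */

	master_fd = open("/dev/ptmx", O_RDWR | O_NOCTTY);
	if (master_fd < 0 || grantpt(master_fd) || unlockpt(master_fd)) {
		perror("ptmx");
		return 1;
	}
	slave_fd = open(ptsname(master_fd), O_RDWR | O_NOCTTY);
	if (slave_fd < 0) {
		perror("pts");
		return 1;
	}

	/* ISIG on, NOFLSH off, so isig() actually flushes the buffers. */
	if (tcgetattr(slave_fd, &tio) == 0) {
		tio.c_lflag |= ISIG;
		tio.c_lflag &= ~NOFLSH;
		tcsetattr(slave_fd, TCSANOW, &tio);
	}

	pthread_create(&t1, NULL, slave_writer, NULL);
	pthread_create(&t2, NULL, master_reader, NULL);

	/* Each ^C is delivered to the slave ldisc via flush_to_ldisc(). */
	for (;;)
		write(master_fd, &intr, 1);
}

Built with something like "gcc -pthread sketch.c", whether this actually
trips lockdep will depend on timing, so please treat it purely as a
restatement of the dependency chain above in userspace terms.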