Message-ID: <20170522102454.GA30331@kroah.com>
Date: Mon, 22 May 2017 12:24:54 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Vegard Nossum <vegard.nossum@...cle.com>,
Jiri Slaby <jslaby@...e.com>, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [linux-next / tty] possible circular locking dependency detected

On Mon, May 22, 2017 at 04:39:43PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> [ 1274.378287] ======================================================
> [ 1274.378289] WARNING: possible circular locking dependency detected
> [ 1274.378290] 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317 Not tainted
> [ 1274.378291] ------------------------------------------------------
> [ 1274.378293] kworker/u8:5/111 is trying to acquire lock:
> [ 1274.378294] (&buf->lock){+.+...}, at: [<ffffffff812f2831>] tty_buffer_flush+0x34/0x88
> [ 1274.378300]
> but task is already holding lock:
> [ 1274.378301] (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
> [ 1274.378307]
> which lock already depends on the new lock.
>
> [ 1274.378309]
> the existing dependency chain (in reverse order) is:
> [ 1274.378310]
> -> #2 (&o_tty->termios_rwsem/1){++++..}:
> [ 1274.378316] lock_acquire+0x183/0x1ae
> [ 1274.378319] down_read+0x3e/0x62
> [ 1274.378321] n_tty_write+0x6c/0x3d6
> [ 1274.378322] tty_write+0x1cc/0x25f
> [ 1274.378325] __vfs_write+0x26/0xec
> [ 1274.378327] vfs_write+0xe1/0x16a
> [ 1274.378329] SyS_write+0x51/0x8e
> [ 1274.378330] entry_SYSCALL_64_fastpath+0x18/0xad
> [ 1274.378331]
> -> #1 (&tty->atomic_write_lock){+.+.+.}:
> [ 1274.378335] lock_acquire+0x183/0x1ae
> [ 1274.378337] __mutex_lock+0x95/0x7ba
> [ 1274.378339] mutex_lock_nested+0x1b/0x1d
> [ 1274.378340] tty_port_default_receive_buf+0x4e/0x81
> [ 1274.378342] flush_to_ldisc+0x87/0xa1
> [ 1274.378345] process_one_work+0x2be/0x52b
> [ 1274.378346] worker_thread+0x1f3/0x2c5
> [ 1274.378349] kthread+0x131/0x139
> [ 1274.378350] ret_from_fork+0x2e/0x40
> [ 1274.378351]
> -> #0 (&buf->lock){+.+...}:
> [ 1274.378355] __lock_acquire+0xec4/0x1444
> [ 1274.378357] lock_acquire+0x183/0x1ae
> [ 1274.378358] __mutex_lock+0x95/0x7ba
> [ 1274.378360] mutex_lock_nested+0x1b/0x1d
> [ 1274.378362] tty_buffer_flush+0x34/0x88
> [ 1274.378364] pty_flush_buffer+0x27/0x70
> [ 1274.378366] tty_driver_flush_buffer+0x1b/0x1e
> [ 1274.378367] isig+0x9b/0xd2
> [ 1274.378369] n_tty_receive_signal_char+0x1c/0x59
> [ 1274.378371] n_tty_receive_char_special+0xa4/0x740
> [ 1274.378373] n_tty_receive_buf_common+0x452/0x810
> [ 1274.378374] n_tty_receive_buf2+0x14/0x16
> [ 1274.378376] tty_ldisc_receive_buf+0x1f/0x4a
> [ 1274.378377] tty_port_default_receive_buf+0x5f/0x81
> [ 1274.378379] flush_to_ldisc+0x87/0xa1
> [ 1274.378380] process_one_work+0x2be/0x52b
> [ 1274.378382] worker_thread+0x1f3/0x2c5
> [ 1274.378383] kthread+0x131/0x139
> [ 1274.378385] ret_from_fork+0x2e/0x40
> [ 1274.378386]
> other info that might help us debug this:
>
> [ 1274.378387] Chain exists of:
> &buf->lock --> &tty->atomic_write_lock --> &o_tty->termios_rwsem/1
>
> [ 1274.378392] Possible unsafe locking scenario:
>
> [ 1274.378393] CPU0 CPU1
> [ 1274.378394] ---- ----
> [ 1274.378394] lock(&o_tty->termios_rwsem/1);
> [ 1274.378397] lock(&tty->atomic_write_lock);
> [ 1274.378399] lock(&o_tty->termios_rwsem/1);
> [ 1274.378402] lock(&buf->lock);
> [ 1274.378403]
> *** DEADLOCK ***
>
> [ 1274.378405] 6 locks held by kworker/u8:5/111:
> [ 1274.378406] #0: ("events_unbound"){.+.+.+}, at: [<ffffffff81058c02>] process_one_work+0x163/0x52b
> [ 1274.378410] #1: ((&buf->work)){+.+...}, at: [<ffffffff81058c02>] process_one_work+0x163/0x52b
> [ 1274.378414] #2: (&port->buf.lock/1){+.+...}, at: [<ffffffff812f2550>] flush_to_ldisc+0x25/0xa1
> [ 1274.378419] #3: (&tty->ldisc_sem){++++.+}, at: [<ffffffff812f1e98>] tty_ldisc_ref+0x1f/0x41
> [ 1274.378423] #4: (&tty->atomic_write_lock){+.+.+.}, at: [<ffffffff812f2c7e>] tty_port_default_receive_buf+0x4e/0x81
> [ 1274.378427] #5: (&o_tty->termios_rwsem/1){++++..}, at: [<ffffffff812ee5c7>] isig+0x47/0xd2
> [ 1274.378431]
> stack backtrace:
> [ 1274.378434] CPU: 1 PID: 111 Comm: kworker/u8:5 Not tainted 4.12.0-rc1-next-20170522-dbg-00007-gc09b2ab28b74-dirty #1317
> [ 1274.378437] Workqueue: events_unbound flush_to_ldisc
> [ 1274.378439] Call Trace:
> [ 1274.378443] dump_stack+0x70/0x9a
> [ 1274.378445] print_circular_bug+0x272/0x280
> [ 1274.378447] __lock_acquire+0xec4/0x1444
> [ 1274.378450] lock_acquire+0x183/0x1ae
> [ 1274.378452] ? lock_acquire+0x183/0x1ae
> [ 1274.378453] ? tty_buffer_flush+0x34/0x88
> [ 1274.378455] __mutex_lock+0x95/0x7ba
> [ 1274.378457] ? tty_buffer_flush+0x34/0x88
> [ 1274.378459] ? isig+0x64/0xd2
> [ 1274.378460] ? tty_buffer_flush+0x34/0x88
> [ 1274.378462] ? find_held_lock+0x31/0x77
> [ 1274.378464] mutex_lock_nested+0x1b/0x1d
> [ 1274.378466] ? mutex_lock_nested+0x1b/0x1d
> [ 1274.378468] tty_buffer_flush+0x34/0x88
> [ 1274.378470] pty_flush_buffer+0x27/0x70
> [ 1274.378472] tty_driver_flush_buffer+0x1b/0x1e
> [ 1274.378473] isig+0x9b/0xd2
> [ 1274.378475] n_tty_receive_signal_char+0x1c/0x59
> [ 1274.378477] n_tty_receive_char_special+0xa4/0x740
> [ 1274.378479] n_tty_receive_buf_common+0x452/0x810
> [ 1274.378481] ? lock_acquire+0x183/0x1ae
> [ 1274.378484] n_tty_receive_buf2+0x14/0x16
> [ 1274.378485] tty_ldisc_receive_buf+0x1f/0x4a
> [ 1274.378487] tty_port_default_receive_buf+0x5f/0x81
> [ 1274.378489] flush_to_ldisc+0x87/0xa1
> [ 1274.378491] process_one_work+0x2be/0x52b
> [ 1274.378493] worker_thread+0x1f3/0x2c5
> [ 1274.378495] ? rescuer_thread+0x2ca/0x2ca
> [ 1274.378497] kthread+0x131/0x139
> [ 1274.378498] ? kthread_create_on_node+0x3f/0x3f
> [ 1274.378500] ret_from_fork+0x2e/0x40
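
The chain lockdep prints above closes a circular (AB-BA style) dependency: the
flush worker ends up wanting &buf->lock while a path that already owns
&o_tty->termios_rwsem sits on the other side of the chain. A deliberately
simplified userspace sketch of that inversion (plain pthreads, not the real
tty code; the two mutex names are stand-ins for &o_tty->termios_rwsem and
&buf->lock, and the intermediate &tty->atomic_write_lock link is collapsed
away to keep the ordering visible at a glance):

/*
 * Illustrative only: userspace pthreads, not kernel code.  The two
 * mutexes stand in for &o_tty->termios_rwsem and &buf->lock; the real
 * chain also passes through &tty->atomic_write_lock, omitted here.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t termios_rwsem_standin = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t buf_lock_standin = PTHREAD_MUTEX_INITIALIZER;

/* CPU0 in the scenario above: isig() holds termios_rwsem, then
 * tty_buffer_flush() goes for buf->lock. */
static void *cpu0(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&termios_rwsem_standin);
	pthread_mutex_lock(&buf_lock_standin);	/* blocks if cpu1 got here first */
	pthread_mutex_unlock(&buf_lock_standin);
	pthread_mutex_unlock(&termios_rwsem_standin);
	return NULL;
}

/* CPU1 in the scenario above: the other path effectively holds the
 * buffer side and then waits for termios_rwsem, the opposite order. */
static void *cpu1(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&buf_lock_standin);
	pthread_mutex_lock(&termios_rwsem_standin);	/* blocks if cpu0 got here first */
	pthread_mutex_unlock(&termios_rwsem_standin);
	pthread_mutex_unlock(&buf_lock_standin);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, cpu0, NULL);
	pthread_create(&b, NULL, cpu1, NULL);
	pthread_join(a, NULL);	/* may never return once both sides block */
	pthread_join(b, NULL);
	puts("no deadlock on this run");
	return 0;
}

If both threads win their first lock before either takes its second, neither
join ever completes. That is the same cycle lockdep flags above, except lockdep
catches the inconsistent ordering without needing the hang to actually happen.
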
Any hint as to what you were doing when this happened?
Does this also show up in 4.11?

thanks,

greg k-h