Message-ID: <569FB10F.4080205@hurleysoftware.com>
Date: Wed, 20 Jan 2016 08:08:47 -0800
From: Peter Hurley <peter@...leysoftware.com>
To: Peter Zijlstra <peterz@...radead.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
Jiri Slaby <jslaby@...e.com>, gnomes@...rguk.ukuu.org.uk
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
LKML <linux-kernel@...r.kernel.org>,
J Freyensee <james_p_freyensee@...ux.intel.com>,
syzkaller <syzkaller@...glegroups.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: tty: deadlock between n_tracerouter_receivebuf and flush_to_ldisc
On 01/20/2016 05:02 AM, Peter Zijlstra wrote:
> On Wed, Dec 30, 2015 at 11:44:01AM +0100, Dmitry Vyukov wrote:
>> -> #3 (&buf->lock){+.+...}:
>> [<ffffffff813f0acf>] lock_acquire+0x19f/0x3c0 kernel/locking/lockdep.c:3585
>> [< inline >] __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:112
>> [<ffffffff85c8e790>] _raw_spin_lock_irqsave+0x50/0x70 kernel/locking/spinlock.c:159
>> [<ffffffff82b8c050>] tty_get_pgrp+0x20/0x80 drivers/tty/tty_io.c:2502
>
> So in any recent code that I look at this function tries to acquire
> tty->ctrl_lock, not buf->lock. Am I missing something ?!
Yes.
The tty locks were annotated with __lockfunc, so they were being elided from lockdep
stacktraces. Greg has a patch from me in his queue that removes the __lockfunc
annotation ("tty: Remove __lockfunc annotation from tty lock functions").
Unfortunately, I think syzkaller's stack-trace post-processing isn't helping
either: it gives the impression that the stack is still inside tty_get_pgrp().
It's not.
It's in pty_flush_buffer(), which is taking the 'other tty' buf->lock.
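Roughly (a simplified sketch from memory of the pty/tty_buffer sources, so the
details may be off):

	/* drivers/tty/pty.c, simplified -- not verbatim */
	static void pty_flush_buffer(struct tty_struct *tty)
	{
		struct tty_struct *to = tty->link;	/* the 'other tty' */

		if (!to)
			return;
		tty_buffer_flush(to, NULL);	/* -> mutex_lock(&to->port->buf.lock) */
		...
	}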
Looks to me like the lock inversion is caused by the tty_driver_flush_buffer()
in n_tracerouter_open()/_close(), but I need to look at this mess a little
closer.
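IOW, something like this (again a rough sketch; the ldisc mutex name is from
memory, so treat it as illustrative only):

	/* drivers/tty/n_tracerouter.c, simplified -- not verbatim */

	/* ldisc open/close: routelock is held across the driver flush */
	static int n_tracerouter_open(struct tty_struct *tty)
	{
		mutex_lock(&routelock);
		...
		tty_driver_flush_buffer(tty);	/* on a pty: pty_flush_buffer()
						 * -> other tty's buf->lock   */
		...
		mutex_unlock(&routelock);
	}

	/* receive path: called from flush_to_ldisc() with that tty's
	 * buf->lock already held, then takes routelock */
	static void n_tracerouter_receivebuf(struct tty_struct *tty,
					     const unsigned char *cp,
					     char *fp, int count)
	{
		mutex_lock(&routelock);
		n_tracesink_datadrain((u8 *)cp, count);
		mutex_unlock(&routelock);
	}

If that reading is right, one path takes buf->lock => routelock and the other
takes routelock => (other tty's) buf->lock, which would be the inversion lockdep
is complaining about.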
Regards,
Peter Hurley
>> [<ffffffff82b9a09a>] __isig+0x1a/0x50 drivers/tty/n_tty.c:1112
>> [<ffffffff82b9c16e>] isig+0xae/0x2c0 drivers/tty/n_tty.c:1131
>> [<ffffffff82b9ef02>] n_tty_receive_signal_char+0x22/0xf0 drivers/tty/n_tty.c:1243
>> [<ffffffff82ba4958>] n_tty_receive_char_special+0x1278/0x2bf0 drivers/tty/n_tty.c:1289
>> [< inline >] n_tty_receive_buf_fast drivers/tty/n_tty.c:1613
>> [< inline >] __receive_buf drivers/tty/n_tty.c:1647
>> [<ffffffff82ba7ca6>] n_tty_receive_buf_common+0x19d6/0x2450 drivers/tty/n_tty.c:1745
>> [<ffffffff82ba8753>] n_tty_receive_buf2+0x33/0x40 drivers/tty/n_tty.c:1780
>> [< inline >] receive_buf drivers/tty/tty_buffer.c:450
>> [<ffffffff82bafa6f>] flush_to_ldisc+0x3bf/0x7f0 drivers/tty/tty_buffer.c:517
>> [<ffffffff8133833c>] process_one_work+0x76c/0x13e0 kernel/workqueue.c:2030
>> [<ffffffff81339093>] worker_thread+0xe3/0xe90 kernel/workqueue.c:2162
>> [<ffffffff8134b63f>] kthread+0x23f/0x2d0 drivers/block/aoe/aoecmd.c:1303
>> [<ffffffff85c8eeef>] ret_from_fork+0x3f/0x70 arch/x86/entry/entry_64.S:468