Message-ID: <CACT4Y+bZC6pfqSyCLkMJpngQsALY1FoUPkDkNmTox51s3Zgabg@mail.gmail.com>
Date: Thu, 28 Dec 2017 14:56:50 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: David Ahern <dsahern@...il.com>
Cc: syzbot
<bot+b17f10c8a8c693b40723d40d6553fbc54d197679@...kaller.appspotmail.com>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
syzkaller-bugs@...glegroups.com,
David Miller <davem@...emloft.net>,
Florian Westphal <fw@...len.de>,
Daniel Borkmann <daniel@...earbox.net>,
Xin Long <lucien.xin@...il.com>, jakub.kicinski@...ronome.com,
mschiffer@...verse-factory.net,
Vladislav Yasevich <vyasevich@...il.com>,
Jiri Benc <jbenc@...hat.com>, netdev <netdev@...r.kernel.org>
Subject: Re: INFO: task hung in cleanup_net
On Tue, Dec 19, 2017 at 7:17 PM, David Ahern <dsahern@...il.com> wrote:
> On 12/19/17 5:47 AM, Dmitry Vyukov wrote:
>> On Tue, Dec 19, 2017 at 1:33 PM, syzbot
>> <bot+b17f10c8a8c693b40723d40d6553fbc54d197679@...kaller.appspotmail.com>
>> wrote:
>>> Hello,
>>>
>>> syzkaller hit the following crash on
>>> e40fd8d6b4d9f59b160faa1736f78fc07533ff37
>>> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
>>> compiler: gcc (GCC) 7.1.1 20170620
>>> .config is attached
>>> Raw console output is attached.
>>>
>>> Unfortunately, I don't have any reproducer for this bug yet.
>>>
>>>
>>> sctp: sctp_transport_update_pmtu: Reported pmtu 508 too low, using default
>>> minimum of 512
>>> sctp: sctp_transport_update_pmtu: Reported pmtu 508 too low, using default
>>> minimum of 512
>>> sctp: sctp_transport_update_pmtu: Reported pmtu 508 too low, using default
>>> minimum of 512
>>> INFO: task kworker/u4:11:3785 blocked for more than 120 seconds.
>>> Not tainted 4.15.0-rc2-next-20171207+ #61
>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> kworker/u4:11 D16448 3785 2 0x80000000
>>> Workqueue: netns cleanup_net
>>> Call Trace:
>>> context_switch kernel/sched/core.c:2800 [inline]
>>> __schedule+0x8eb/0x2060 kernel/sched/core.c:3376
>>> schedule+0xf5/0x430 kernel/sched/core.c:3435
>>> schedule_preempt_disabled+0x10/0x20 kernel/sched/core.c:3493
>>> __mutex_lock_common kernel/locking/mutex.c:833 [inline]
>>> __mutex_lock+0xaad/0x1a80 kernel/locking/mutex.c:893
>>> mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
>>> rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
>>> cleanup_net+0x24c/0xb60 net/core/net_namespace.c:453
>>> process_one_work+0xbfd/0x1bc0 kernel/workqueue.c:2113
>>> worker_thread+0x223/0x1990 kernel/workqueue.c:2247
>>> kthread+0x37a/0x440 kernel/kthread.c:238
>>> ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:524
>>>
>>> Showing all locks held in the system:
>>> 2 locks held by khungtaskd/672:
>>> #0: (rcu_read_lock){....}, at: [<00000000b9bef8ab>]
>>> check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
>>> #0: (rcu_read_lock){....}, at: [<00000000b9bef8ab>] watchdog+0x1c5/0xd60
>>> kernel/hung_task.c:249
>>> #1: (tasklist_lock){.+.+}, at: [<000000006d6acf8b>]
>>> debug_show_all_locks+0xd3/0x400 kernel/locking/lockdep.c:4554
>>> 1 lock held by rsyslogd/2970:
>>> #0: (&f->f_pos_lock){+.+.}, at: [<000000001cede688>]
>>> __fdget_pos+0x131/0x1a0 fs/file.c:765
>>> 2 locks held by getty/3052:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 2 locks held by getty/3053:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 2 locks held by getty/3054:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 2 locks held by getty/3055:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 2 locks held by getty/3056:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 2 locks held by getty/3057:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 2 locks held by getty/3058:
>>> #0: (&tty->ldisc_sem){++++}, at: [<000000004e0245ba>]
>>> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>>> #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000ff863b18>]
>>> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
>>> 1 lock held by syz-executor2/3320:
>>> #0: (rtnl_mutex){+.+.}, at: [<0000000063c47ced>] rtnl_lock+0x17/0x20
>>> net/core/rtnetlink.c:74
>>> 4 locks held by kworker/u4:11/3785:
>>> #0: ((wq_completion)"%s""netns"){+.+.}, at: [<0000000087d81703>]
>>> __write_once_size include/linux/compiler.h:212 [inline]
>>> #0: ((wq_completion)"%s""netns"){+.+.}, at: [<0000000087d81703>]
>>> atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
>>> #0: ((wq_completion)"%s""netns"){+.+.}, at: [<0000000087d81703>]
>>> atomic_long_set include/asm-generic/atomic-long.h:57 [inline]
>>> #0: ((wq_completion)"%s""netns"){+.+.}, at: [<0000000087d81703>]
>>> set_work_data kernel/workqueue.c:619 [inline]
>>> #0: ((wq_completion)"%s""netns"){+.+.}, at: [<0000000087d81703>]
>>> set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
>>> #0: ((wq_completion)"%s""netns"){+.+.}, at: [<0000000087d81703>]
>>> process_one_work+0xad4/0x1bc0 kernel/workqueue.c:2084
>>> #1: (net_cleanup_work){+.+.}, at: [<000000008a817b5c>]
>>> process_one_work+0xb2f/0x1bc0 kernel/workqueue.c:2088
>>> #2: (net_mutex){+.+.}, at: [<0000000079150fd8>] cleanup_net+0x247/0xb60
>>> net/core/net_namespace.c:450
>>> #3: (rtnl_mutex){+.+.}, at: [<0000000063c47ced>] rtnl_lock+0x17/0x20
>>> net/core/rtnetlink.c:74
>>> 3 locks held by kworker/0:6/14978:
>>> #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<0000000087d81703>]
>>> __write_once_size include/linux/compiler.h:212 [inline]
>>> #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<0000000087d81703>]
>>> atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
>>> #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<0000000087d81703>]
>>> atomic_long_set include/asm-generic/atomic-long.h:57 [inline]
>>> #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<0000000087d81703>]
>>> set_work_data kernel/workqueue.c:619 [inline]
>>> #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<0000000087d81703>]
>>> set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
>>> #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<0000000087d81703>]
>>> process_one_work+0xad4/0x1bc0 kernel/workqueue.c:2084
>>> #1: ((addr_chk_work).work){+.+.}, at: [<000000008a817b5c>]
>>> process_one_work+0xb2f/0x1bc0 kernel/workqueue.c:2088
>>> #2: (rtnl_mutex){+.+.}, at: [<0000000063c47ced>] rtnl_lock+0x17/0x20
>>> net/core/rtnetlink.c:74
>>> 1 lock held by syz-executor2/15852:
>>> #0: (rtnl_mutex){+.+.}, at: [<0000000063c47ced>] rtnl_lock+0x17/0x20
>>> net/core/rtnetlink.c:74
>>> 1 lock held by syz-executor7/15958:
>>> #0: (rtnl_mutex){+.+.}, at: [<0000000063c47ced>] rtnl_lock+0x17/0x20
>>> net/core/rtnetlink.c:74
>>> 1 lock held by syz-executor7/15961:
>>> #0: (rtnl_mutex){+.+.}, at: [<0000000063c47ced>] rtnl_lock+0x17/0x20
>>> net/core/rtnetlink.c:74
>>>
>>> =============================================
>>>
>>> NMI backtrace for cpu 1
>>> CPU: 1 PID: 672 Comm: khungtaskd Not tainted 4.15.0-rc2-next-20171207+ #61
>>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
>>> Google 01/01/2011
>>> Call Trace:
>>> __dump_stack lib/dump_stack.c:17 [inline]
>>> dump_stack+0x194/0x257 lib/dump_stack.c:53
>>> nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
>>> nmi_trigger_cpumask_backtrace+0x122/0x180 lib/nmi_backtrace.c:62
>>> arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
>>> trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
>>> check_hung_task kernel/hung_task.c:132 [inline]
>>> check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
>>> watchdog+0x90c/0xd60 kernel/hung_task.c:249
>>> kthread+0x37a/0x440 kernel/kthread.c:238
>>> ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:524
>>> Sending NMI from CPU 1 to CPUs 0:
>>> NMI backtrace for cpu 0
>>> CPU: 0 PID: 15938 Comm: syz-executor7 Not tainted 4.15.0-rc2-next-20171207+
>>> #61
>>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
>>> Google 01/01/2011
>>> RIP: 0010:inb arch/x86/include/asm/io.h:348 [inline]
>>> RIP: 0010:io_serial_in+0x6b/0x90 drivers/tty/serial/8250/8250_port.c:434
>>> RSP: 0018:ffff8801c1c173e8 EFLAGS: 00000002
>>> RAX: dffffc0000000000 RBX: 00000000000003fd RCX: 0000000000000000
>>> RDX: 00000000000003fd RSI: ffffc90001c88000 RDI: ffffffff880fa9c0
>>> RBP: ffff8801c1c173f8 R08: 0000000000000002 R09: 000000000000000c
>>> R10: 0000000000000000 R11: ffffffff87896d20 R12: ffffffff880fa980
>>> R13: 0000000000000020 R14: fffffbfff101f577 R15: fffffbfff101f53a
>>> FS: 00007f1ef8fad700(0000) GS:ffff8801db200000(0000) knlGS:0000000000000000
>>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 00007fbea91b6000 CR3: 00000001d0aaa000 CR4: 00000000001406f0
>>> DR0: 0000000020001000 DR1: 0000000020001000 DR2: 0000000020001020
>>> DR3: 0000000020000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
>>> Call Trace:
>>> serial_in drivers/tty/serial/8250/8250.h:111 [inline]
>>> wait_for_xmitr+0x93/0x1e0 drivers/tty/serial/8250/8250_port.c:2033
>>> serial8250_console_putchar+0x1f/0x60
>>> drivers/tty/serial/8250/8250_port.c:3170
>>> uart_console_write+0xac/0xe0 drivers/tty/serial/serial_core.c:1858
>>> serial8250_console_write+0x647/0xa20
>>> drivers/tty/serial/8250/8250_port.c:3236
>>> univ8250_console_write+0x5f/0x70 drivers/tty/serial/8250/8250_core.c:590
>>> call_console_drivers kernel/printk/printk.c:1574 [inline]
>>> console_unlock+0x788/0xd70 kernel/printk/printk.c:2233
>>> vprintk_emit+0x4ad/0x590 kernel/printk/printk.c:1757
>>> vprintk_default+0x28/0x30 kernel/printk/printk.c:1796
>>> vprintk_func+0x57/0xc0 kernel/printk/printk_safe.c:379
>>> printk+0xaa/0xca kernel/printk/printk.c:1829
>>> __dev_set_promiscuity+0x2a4/0x630 net/core/dev.c:6609
>>> __dev_change_flags+0x559/0x990 net/core/dev.c:6826
>>> dev_change_flags+0x88/0x140 net/core/dev.c:6886
>>> dev_ifsioc+0x60d/0x9b0 net/core/dev_ioctl.c:257
>>> dev_ioctl+0x2c2/0xf90 net/core/dev_ioctl.c:566
>>> sock_do_ioctl+0x94/0xb0 net/socket.c:971
>>> sock_ioctl+0x2c2/0x440 net/socket.c:1061
>>> vfs_ioctl fs/ioctl.c:46 [inline]
>>> do_vfs_ioctl+0x1b1/0x1530 fs/ioctl.c:686
>>> SYSC_ioctl fs/ioctl.c:701 [inline]
>>> SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
>>> entry_SYSCALL_64_fastpath+0x1f/0x96
>>> RIP: 0033:0x452a39
>>> RSP: 002b:00007f1ef8facc58 EFLAGS: 00000212 ORIG_RAX: 0000000000000010
>>> RAX: ffffffffffffffda RBX: 00007f1ef8fad700 RCX: 0000000000452a39
>>> RDX: 00000000208a3fe0 RSI: 0000000000008914 RDI: 0000000000000019
>>> RBP: 0000000000a6f880 R08: 0000000000000000 R09: 0000000000000000
>>> R10: 0000000000000000 R11: 0000000000000212 R12: 0000000000000000
>>> R13: 0000000000a6f7ff R14: 00007f1ef8fad9c0 R15: 0000000000000002
>>> Code: 24 d9 00 00 00 49 8d 7c 24 40 48 b8 00 00 00 00 00 fc ff df 48 89 fa
>>> 48 c1 ea 03 d3 e3 80 3c 02 00 75 17 41 03 5c 24 40 89 da ec <5b> 0f b6 c0 41
>>> 5c 5d c3 e8 88 ad 18 ff eb c2 e8 e1 ad 18 ff eb
>>
>>
>> This looks like an rtnetlink issue.
>
> Perhaps related to / fixed by: http://patchwork.ozlabs.org/patch/850957/
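For context, that patch (net: Fix double free and memory corruption in
get_net_ns_by_id()) makes the nsid lookup refuse to take a reference on a
net namespace whose refcount has already dropped to zero, instead of
unconditionally bumping it and setting up a later double free. A minimal
userspace sketch of that "increment unless already zero" pattern, assuming
C11 atomics; struct obj and get_unless_zero() are illustrative names, not
the kernel's (the in-kernel equivalent is maybe_get_net() built on an
inc-not-zero primitive):

/* Sketch: a lookup racing with the final put() must fail rather than
 * resurrect a dying object. */
#include <stdatomic.h>

struct obj {
        atomic_int refcount;    /* teardown starts once this reaches 0 */
};

/* Take a reference only if the object is still live: returns 1 on
 * success, 0 if the refcount already hit zero. An unconditional
 * refcount++ here is the bug shape being fixed: a 0 -> 1 resurrection,
 * followed by a second free when the extra reference is dropped. */
static int get_unless_zero(struct obj *o)
{
        int c = atomic_load(&o->refcount);

        while (c != 0) {
                if (atomic_compare_exchange_weak(&o->refcount, &c, c + 1))
                        return 1;
        }
        return 0;
}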
If you think so, let's tell syzbot about it:
#syz fix: net: Fix double free and memory corruption in get_net_ns_by_id()
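(The "#syz fix:" line above is syzbot's command syntax: the bot will close
this report once a commit with that exact title lands in the tested trees.)

For anyone skimming the traces rather than the lock lists: kworker/u4:11 is
cleanup_net() holding net_mutex and blocked acquiring rtnl_mutex in
rtnl_lock(), lockdep lists rtnl_mutex against several other tasks, and the
CPU 0 backtrace shows the ioctl(SIOCSIFFLAGS) path (note RSI: 8914) doing
printk() to the serial console with rtnl_mutex held. Reduced to a userspace
sketch of that stall shape, assuming pthreads; the thread names and the
sleep length are illustrative stand-ins, not taken from the report:

/* Sketch of the stall: one thread takes net_mutex then blocks on
 * rtnl_mutex; another holds rtnl_mutex while doing slow work. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t net_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t rtnl_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for cleanup_net(): net_mutex first, then rtnl_mutex. */
static void *cleanup_worker(void *arg)
{
        pthread_mutex_lock(&net_mutex);
        pthread_mutex_lock(&rtnl_mutex);  /* parks here, like kworker/u4:11 */
        pthread_mutex_unlock(&rtnl_mutex);
        pthread_mutex_unlock(&net_mutex);
        return NULL;
}

/* Stand-in for the ioctl path on CPU 0: holds rtnl_mutex while slow
 * console output drains. */
static void *slow_rtnl_holder(void *arg)
{
        pthread_mutex_lock(&rtnl_mutex);
        sleep(150);                       /* > 120s trips the hung-task check */
        pthread_mutex_unlock(&rtnl_mutex);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&b, NULL, slow_rtnl_holder, NULL);
        sleep(1);                         /* let the holder take the lock */
        pthread_create(&a, NULL, cleanup_worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}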