Message-ID: <CACT4Y+YO8j-QY4Drfe1M80R6Dgc-_gaUWf4--0w2ndoee-dEcQ@mail.gmail.com>
Date: Tue, 12 Dec 2017 17:51:06 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: syzbot <bot+caffa2697ebe6d891ac5d7701d58644a307c470a@...kaller.appspotmail.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>, syzkaller-bugs@...glegroups.com,
	Jens Axboe <axboe@...nel.dk>, Ming Lei <tom.leiming@...il.com>,
	Omar Sandoval <osandov@...com>, Hannes Reinecke <hare@...e.de>,
	shli@...com
Subject: Re: INFO: task hung in blk_mq_freeze_queue_wait

On Sun, Dec 10, 2017 at 2:36 PM, syzbot
<bot+caffa2697ebe6d891ac5d7701d58644a307c470a@...kaller.appspotmail.com>
wrote:
> Hello,
>
> syzkaller hit the following crash on
> ad4dac17f9d563b9e34aab78a34293b10993e9b5
> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
> compiler: gcc (GCC) 7.1.1 20170620
> .config is attached
> Raw console output is attached.
>
> Unfortunately, I don't have any reproducer for this bug yet.
>
>
> INFO: task syz-executor4:10562 blocked for more than 120 seconds.
>       Not tainted 4.15.0-rc2-next-20171208+ #63
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> syz-executor4   D24560 10562   3357 0x00000004
> Call Trace:
>  context_switch kernel/sched/core.c:2800 [inline]
>  __schedule+0x8eb/0x2060 kernel/sched/core.c:3376
>  schedule+0xf5/0x430 kernel/sched/core.c:3435
>  blk_mq_freeze_queue_wait+0x1bb/0x400 block/blk-mq.c:137
>  blk_freeze_queue block/blk-mq.c:164 [inline]
>  blk_mq_freeze_queue+0x1d/0x20 block/blk-mq.c:173
>  loop_set_status+0x1a2/0xf60 drivers/block/loop.c:1097
>  loop_set_status64+0x95/0x100 drivers/block/loop.c:1271
>  lo_ioctl+0xd98/0x1b90 drivers/block/loop.c:1381
>  __blkdev_driver_ioctl block/ioctl.c:303 [inline]
>  blkdev_ioctl+0x1759/0x1e00 block/ioctl.c:601
>  block_ioctl+0xea/0x130 fs/block_dev.c:1860
>  vfs_ioctl fs/ioctl.c:46 [inline]
>  do_vfs_ioctl+0x1b1/0x1530 fs/ioctl.c:686
>  SYSC_ioctl fs/ioctl.c:701 [inline]
>  SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
>  entry_SYSCALL_64_fastpath+0x1f/0x96
> RIP: 0033:0x452a39
> RSP: 002b:00007fa74b7d6c58 EFLAGS: 00000212 ORIG_RAX: 0000000000000010
> RAX: ffffffffffffffda RBX: 0000000000758020 RCX: 0000000000452a39
> RDX: 00000000202d1000 RSI: 0000000000004c04 RDI: 0000000000000014
> RBP: 000000000000055e R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000212 R12: 00000000006f6170
> R13: 00000000ffffffff R14: 00007fa74b7d76d4 R15: 0000000000000000
>
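For anyone reading along: blk_mq_freeze_queue() first stops new requests
from entering the queue, then blk_mq_freeze_queue_wait() sleeps until every
in-flight request has completed, which is where pid 10562 is parked in the
trace above. A rough userspace model of that drain handshake follows (a
sketch only: names loosely mirror block/blk-mq.c, but the kernel actually
uses a percpu refcount on q_usage_counter plus a wait queue, not a
mutex-protected integer):

/*
 * Userspace model of the blk-mq freeze/drain handshake.
 * Illustrative only, not kernel code.  Build with: cc -pthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t mq_freeze_wq = PTHREAD_COND_INITIALIZER;
static int q_usage_counter;   /* requests currently inside the queue */
static bool frozen;           /* stands in for freeze_depth > 0 */

/* blk_queue_enter(): new I/O may enter only while not frozen */
static bool queue_enter(void)
{
        pthread_mutex_lock(&lock);
        if (frozen) {
                pthread_mutex_unlock(&lock);
                return false;
        }
        q_usage_counter++;
        pthread_mutex_unlock(&lock);
        return true;
}

/* blk_queue_exit(): completion drops the count and wakes the freezer */
static void queue_exit(void)
{
        pthread_mutex_lock(&lock);
        if (--q_usage_counter == 0)
                pthread_cond_broadcast(&mq_freeze_wq);
        pthread_mutex_unlock(&lock);
}

/*
 * blk_mq_freeze_queue(): stop new entries, then wait for the drain.
 * If some in-flight request never completes, this wait never returns,
 * which is the state pid 10562 is stuck in above.
 */
static void freeze_queue(void)
{
        pthread_mutex_lock(&lock);
        frozen = true;
        while (q_usage_counter > 0)
                pthread_cond_wait(&mq_freeze_wq, &lock);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        if (!queue_enter())   /* one request enters... */
                return 1;
        queue_exit();         /* ...and completes */
        freeze_queue();       /* so the freeze drains immediately */
        puts("queue frozen");
        return 0;
}
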
> Showing all locks held in the system:
> 2 locks held by khungtaskd/671:
>  #0: (rcu_read_lock){....}, at: [<00000000c846f207>]
> check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
>  #0: (rcu_read_lock){....}, at: [<00000000c846f207>] watchdog+0x1c5/0xd60
> kernel/hung_task.c:249
>  #1: (tasklist_lock){.+.+}, at: [<000000009d31aff0>]
> debug_show_all_locks+0xd3/0x400 kernel/locking/lockdep.c:4554
> 2 locks held by getty/3115:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 2 locks held by getty/3116:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 2 locks held by getty/3117:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 2 locks held by getty/3118:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 2 locks held by getty/3119:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 2 locks held by getty/3120:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 2 locks held by getty/3121:
>  #0: (&tty->ldisc_sem){++++}, at: [<00000000268943a7>]
> ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
>  #1: (&ldata->atomic_read_lock){+.+.}, at: [<00000000c938216d>]
> n_tty_read+0x2f2/0x1a10 drivers/tty/n_tty.c:2131
> 1 lock held by syz-executor4/10562:
>  #0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000a39a9511>]
> lo_ioctl+0x8b/0x1b90 drivers/block/loop.c:1355
> 1 lock held by syz-executor4/10570:
>  #0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000a39a9511>]
> lo_ioctl+0x8b/0x1b90 drivers/block/loop.c:1355
> 1 lock held by syz-executor4/10577:
>  #0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000a39a9511>]
> lo_ioctl+0x8b/0x1b90 drivers/block/loop.c:1355
> 1 lock held by syz-executor4/10589:
>  #0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000a39a9511>]
> lo_ioctl+0x8b/0x1b90 drivers/block/loop.c:1355
>
> =============================================
>
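Note the shape of the pile-up in the lock list: pid 10562 holds
lo_ctl_mutex while parked in the freeze wait, and pids 10570, 10577 and
10589 all show lo_ioctl+0x8b, which looks like the mutex acquisition site,
so they appear to be queued behind it. A minimal pthread sketch of that
serialization (a hypothetical userspace model, not drivers/block/loop.c;
build with -pthread):

/*
 * The first "ioctl" wins the mutex and then parks forever, standing in
 * for the stuck queue freeze; every later "ioctl" blocks behind it.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lo_ctl_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stand-in for blk_mq_freeze_queue() waiting on I/O that never completes */
static void freeze_queue_forever(void)
{
        pause();   /* parked, like pid 10562 */
}

static void *loop_ioctl(void *arg)
{
        long id = (long)arg;

        pthread_mutex_lock(&lo_ctl_mutex);  /* later callers stack up here */
        printf("task %ld acquired lo_ctl_mutex\n", id);
        if (id == 0)
                freeze_queue_forever();     /* first caller hangs in freeze */
        pthread_mutex_unlock(&lo_ctl_mutex);
        return NULL;
}

int main(void)
{
        pthread_t t[4];
        long i;

        pthread_create(&t[0], NULL, loop_ioctl, (void *)0L);
        usleep(100 * 1000);  /* let the first ioctl win the mutex and park */
        for (i = 1; i < 4; i++)
                pthread_create(&t[i], NULL, loop_ioctl, (void *)i);
        sleep(1);            /* now one holder plus three blocked waiters */
        return 0;            /* a real kernel would stay wedged here */
}
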
> NMI backtrace for cpu 0
> CPU: 0 PID: 671 Comm: khungtaskd Not tainted 4.15.0-rc2-next-20171208+ #63
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> Call Trace:
>  __dump_stack lib/dump_stack.c:17 [inline]
>  dump_stack+0x194/0x257 lib/dump_stack.c:53
>  nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
>  nmi_trigger_cpumask_backtrace+0x122/0x180 lib/nmi_backtrace.c:62
>  arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
>  trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
>  check_hung_task kernel/hung_task.c:132 [inline]
>  check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
>  watchdog+0x90c/0xd60 kernel/hung_task.c:249
>  kthread+0x37a/0x440 kernel/kthread.c:238
>  ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:524
> Sending NMI from CPU 0 to CPUs 1:
> NMI backtrace for cpu 1
> CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.15.0-rc2-next-20171208+ #63
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> RIP: 0010:trace_hardirqs_on_caller+0x347/0x5c0 kernel/locking/lockdep.c:2928
> RSP: 0018:ffff8801db307b38 EFLAGS: 00000806
> RAX: 0000000000000000 RBX: 0000000000000003 RCX: 0000000000000001
> RDX: 1ffffffff0d3d899 RSI: 0000000080000001 RDI: ffffffff869ec4c8
> RBP: ffff8801db307b48 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff85400237
> R13: 1ffff1003b660fb1 R14: 0000000000000080 R15: dffffc0000000000
> FS: 0000000000000000(0000) GS:ffff8801db300000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00000000006dd000 CR3: 00000001ce2f2000 CR4: 00000000001406e0
> Call Trace:
>  <IRQ>
>  trace_hardirqs_on+0xd/0x10 kernel/locking/lockdep.c:2946
>  __do_softirq+0x237/0xbb2 kernel/softirq.c:269
>  invoke_softirq kernel/softirq.c:365 [inline]
>  irq_exit+0x1d3/0x210 kernel/softirq.c:405
>  scheduler_ipi+0x32a/0x830 kernel/sched/core.c:1804
>  smp_reschedule_interrupt+0xe6/0x670 arch/x86/kernel/smp.c:277
>  reschedule_interrupt+0xa9/0xb0 arch/x86/entry/entry_64.S:944
>  </IRQ>
> RIP: 0010:native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:54
> RSP: 0018:ffff8801d9f97da8 EFLAGS: 00000282 ORIG_RAX: ffffffffffffff02
> RAX: dffffc0000000000 RBX: 1ffff1003b3f2fb8 RCX: 0000000000000000
> RDX: 1ffffffff0c5975c RSI: 0000000000000001 RDI: ffffffff862cbae0
> RBP: ffff8801d9f97da8 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
> R13: ffff8801d9f97e60 R14: ffffffff869efaa0 R15: 0000000000000000
>  arch_safe_halt arch/x86/include/asm/paravirt.h:93 [inline]
>  default_idle+0xbf/0x430 arch/x86/kernel/process.c:355
>  arch_cpu_idle+0xa/0x10 arch/x86/kernel/process.c:346
>  default_idle_call+0x36/0x90 kernel/sched/idle.c:98
>  cpuidle_idle_call kernel/sched/idle.c:156 [inline]
>  do_idle+0x24a/0x3b0 kernel/sched/idle.c:246
>  cpu_startup_entry+0x18/0x20 kernel/sched/idle.c:351
>  start_secondary+0x330/0x460 arch/x86/kernel/smpboot.c:277
>  secondary_startup_64+0xa5/0xb0 arch/x86/entry/entry_64.S:237
> Code: 00 8b 3d 6d 7b 33 06 85 ff 0f 85 ea fd ff ff 48 c7 c7 c8 c4 9e 86 48
> b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 0f b6 04 02 <48> 89 fa 83 e2
> 07 38 d0 7f 08 84 c0 0f 85 2d 02 00 00 80 3d 18

+loop maintainers

#syz dup: INFO: task hung in lo_ioctl

> ---
> This bug is generated by a dumb bot. It may contain errors.
> See https://goo.gl/tpsmEJ for details.
> Direct all questions to syzkaller@...glegroups.com.
> Please credit me with: Reported-by: syzbot <syzkaller@...glegroups.com>
>
> syzbot will keep track of this bug report.
> Once a fix for this bug is merged into any tree, reply to this email with:
> #syz fix: exact-commit-title
> To mark this as a duplicate of another syzbot report, please reply with:
> #syz dup: exact-subject-of-another-report
> If it's a one-off invalid bug report, please reply with:
> #syz invalid
> Note: if the crash happens again, it will cause creation of a new bug
> report.
> Note: all commands must start from beginning of the line in the email body.
>
> --
> You received this message because you are subscribed to the Google Groups
> "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to syzkaller-bugs+unsubscribe@...glegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/syzkaller-bugs/f4f5e803e1a0a21766055ffc7f8f%40google.com.
> For more options, visit https://groups.google.com/d/optout.
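As background on the detector that produced this report: khungtaskd (the
watchdog in kernel/hung_task.c, visible in the CPU 0 backtrace above)
periodically scans tasks and flags any that sit in uninterruptible sleep
without a single context switch across the timeout window, 120 seconds by
default. A toy model of that check (heavily simplified: no tasklist
iteration or locking, and the struct fields are invented stand-ins for the
kernel's saved nvcsw+nivcsw snapshot):

/* Toy model of check_hung_task() from kernel/hung_task.c. */
#include <stdbool.h>
#include <stdio.h>

struct task {
        const char *comm;
        bool uninterruptible;        /* in TASK_UNINTERRUPTIBLE ("D") state */
        unsigned long switch_count;  /* roughly nvcsw + nivcsw */
        unsigned long last_seen;     /* snapshot taken by the previous scan */
};

/* one scan, run every hung_task_timeout_secs (120 s by default) */
static void check_hung_task(struct task *t)
{
        if (t->uninterruptible && t->switch_count == t->last_seen)
                printf("INFO: task %s blocked for more than 120 seconds.\n",
                       t->comm);
        t->last_seen = t->switch_count;
}

int main(void)
{
        /* a D-state task with no switches since the last scan: reported */
        struct task t = { "syz-executor4", true, 42, 42 };

        check_hung_task(&t);
        return 0;
}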