Message-ID: <CACT4Y+ZFfDdDT6EpJROqZGXhhyBb=kHf5z3XUFpb8v1HM4sfvA@mail.gmail.com>
Date:   Tue, 15 Nov 2016 07:31:07 +0100
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        "x86@...nel.org" <x86@...nel.org>, KVM list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Steve Rutherford <srutherford@...gle.com>
Cc:     syzkaller <syzkaller@...glegroups.com>
Subject: Re: kvm: deadlock between kvm_vm_ioctl_get_dirty_log/kvm_hv_set_msr_common/kvm_create_pit

On Tue, Nov 15, 2016 at 7:27 AM, Dmitry Vyukov <dvyukov@...gle.com> wrote:
> Hello,
>
> The following program produces a deadlocked, unkillable process:
> https://gist.githubusercontent.com/dvyukov/fb7e93f6618f4eccb84d419ea6cec491/raw/a14b60250e593eb1b61f50cead41059dc49ceff2/gistfile1.txt
>
>
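(For reference, the program boils down to the pattern below. This is a rough sketch, not the generated reproducer itself; HV_X64_MSR_GUEST_OS_ID (0x40000000) is my guess at a partition-wide Hyper-V MSR that reaches kvm_hv_set_msr_pw(), and error handling is omitted.)

/* sketch: race KVM_CREATE_PIT2 against a partition-wide Hyper-V MSR write */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int vm, vcpu;

static void *set_hv_msr(void *arg)
{
    struct { struct kvm_msrs hdr; struct kvm_msr_entry e; } msrs;

    memset(&msrs, 0, sizeof(msrs));
    msrs.hdr.nmsrs = 1;
    msrs.e.index = 0x40000000;            /* HV_X64_MSR_GUEST_OS_ID (assumed) */
    msrs.e.data = 1;
    for (;;)
        ioctl(vcpu, KVM_SET_MSRS, &msrs); /* srcu_read_lock, then kvm->lock */
    return NULL;
}

int main(void)
{
    struct kvm_pit_config pit = {};
    pthread_t th;
    int kvm = open("/dev/kvm", O_RDWR);

    vm = ioctl(kvm, KVM_CREATE_VM, 0);
    vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    pthread_create(&th, NULL, set_hv_msr, NULL);
    for (;;)
        ioctl(vm, KVM_CREATE_PIT2, &pit); /* kvm->lock, then synchronize_srcu */
    return 0;
}
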
> # cat /proc/9362/task/*/stack
> [<ffffffff815b9be8>] __synchronize_srcu+0x2f8/0x4a0 kernel/rcu/srcu.c:448
> [<ffffffff815b9df3>] synchronize_srcu_expedited+0x13/0x20 kernel/rcu/srcu.c:510
> [<ffffffff8108891b>] kvm_io_bus_register_dev+0x2ab/0x3e0
> arch/x86/kvm/../../../virt/kvm/kvm_main.c:3559
> [<ffffffff8117f676>] kvm_create_pit+0x5c6/0x8c0 arch/x86/kvm/i8254.c:694
> [<ffffffff810df816>] kvm_arch_vm_ioctl+0x1406/0x23c0 arch/x86/kvm/x86.c:3956
> [<ffffffff81082b4a>] kvm_vm_ioctl+0x1fa/0x1a70
> arch/x86/kvm/../../../virt/kvm/kvm_main.c:3099
> [<     inline     >] vfs_ioctl fs/ioctl.c:43
> [<ffffffff81abcb44>] do_vfs_ioctl+0x1c4/0x1630 fs/ioctl.c:679
> [<     inline     >] SYSC_ioctl fs/ioctl.c:694
> [<ffffffff81abe044>] SyS_ioctl+0x94/0xc0 fs/ioctl.c:685
> [<ffffffff88143885>] entry_SYSCALL_64_fastpath+0x23/0xc6
> arch/x86/entry/entry_64.S:209
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> [<ffffffff81199103>] kvm_hv_set_msr_common+0x163/0x2a30
> arch/x86/kvm/hyperv.c:1145
> [<ffffffff810bf82b>] kvm_set_msr_common+0xb0b/0x23a0 arch/x86/kvm/x86.c:2261
> [<ffffffff811cdcdd>] vmx_set_msr+0x27d/0xcb0 arch/x86/kvm/vmx.c:3149
> [<ffffffff81094879>] kvm_set_msr+0xd9/0x170 arch/x86/kvm/x86.c:1084
> [<ffffffff81094c23>] do_set_msr+0x123/0x1a0 arch/x86/kvm/x86.c:1113
> [<     inline     >] __msr_io arch/x86/kvm/x86.c:2523
> [<ffffffff810bd2b0>] msr_io+0x250/0x460 arch/x86/kvm/x86.c:2560
> [<ffffffff810d9f10>] kvm_arch_vcpu_ioctl+0x360/0x44a0 arch/x86/kvm/x86.c:3401
> [<ffffffff8107aef7>] kvm_vcpu_ioctl+0x237/0x11c0
> arch/x86/kvm/../../../virt/kvm/kvm_main.c:2710
> [<     inline     >] vfs_ioctl fs/ioctl.c:43
> [<ffffffff81abcb44>] do_vfs_ioctl+0x1c4/0x1630 fs/ioctl.c:679
> [<     inline     >] SYSC_ioctl fs/ioctl.c:694
> [<ffffffff81abe044>] SyS_ioctl+0x94/0xc0 fs/ioctl.c:685
> [<ffffffff88143885>] entry_SYSCALL_64_fastpath+0x23/0xc6
> arch/x86/entry/entry_64.S:209
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> [<ffffffff810de0ff>] kvm_vm_ioctl_get_dirty_log+0x8f/0x210
> arch/x86/kvm/x86.c:3779
> [<ffffffff81083b34>] kvm_vm_ioctl+0x11e4/0x1a70
> arch/x86/kvm/../../../virt/kvm/kvm_main.c:2969
> [<     inline     >] vfs_ioctl fs/ioctl.c:43
> [<ffffffff81abcb44>] do_vfs_ioctl+0x1c4/0x1630 fs/ioctl.c:679
> [<     inline     >] SYSC_ioctl fs/ioctl.c:694
> [<ffffffff81abe044>] SyS_ioctl+0x94/0xc0 fs/ioctl.c:685
> [<ffffffff88143885>] entry_SYSCALL_64_fastpath+0x23/0xc6
> arch/x86/entry/entry_64.S:209
> [<ffffffffffffffff>] 0xffffffffffffffff
>
>
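Putting the first two stacks side by side, the cycle seems to be (abridged from the files named in the traces, not verbatim; kvm->slots_lock is also held on the PIT side but is not part of the cycle):

/* task 1: ioctl(vm, KVM_CREATE_PIT2) */
mutex_lock(&kvm->lock);                     /* kvm_arch_vm_ioctl() */
kvm_create_pit()
  kvm_io_bus_register_dev()
    synchronize_srcu_expedited(&kvm->srcu); /* waits for all SRCU readers */

/* task 2: ioctl(vcpu, KVM_SET_MSRS) on a partition-wide Hyper-V MSR */
srcu_read_lock(&kvm->srcu);                 /* msr_io(): task 1's grace period
                                               can't end until this section does */
kvm_hv_set_msr_common()
  mutex_lock(&kvm->lock);                   /* blocks: held by task 1 */

Task 1 waits for task 2's SRCU read-side critical section to end, while task 2 waits for task 1's kvm->lock: a lock-order inversion between a mutex and an SRCU grace period.
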
> INFO: task syz-executor:5833 blocked for more than 120 seconds.
>       Not tainted 4.9.0-rc5+ #28
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> syz-executor    D17872  5833   4082 0x00000004
>  ffff880033944780 ffff8800602f5100 ffff8800652b0c80 ffff8800391a2380
>  ffff88006d122cd8 ffff8800368763a8 ffffffff8812c15c 0000000041b58ab3
>  ffff88006d123668 ffff88006d123640 1ffff10006d0ec5c ffff88006d122cd8
> Call Trace:
>  [<ffffffff8812d8bd>] schedule+0x10d/0x460 kernel/sched/core.c:3457
>  [<ffffffff8812e7e5>] schedule_preempt_disabled+0x15/0x20
> kernel/sched/core.c:3490
>  [<     inline     >] __mutex_lock_common kernel/locking/mutex.c:582
>  [<ffffffff881339f6>] mutex_lock_nested+0x686/0xf20 kernel/locking/mutex.c:621
>  [<ffffffff81199143>] kvm_hv_set_msr_common+0x163/0x2a30
> arch/x86/kvm/hyperv.c:1145
>  [<ffffffff810bf7fb>] kvm_set_msr_common+0xb0b/0x23a0 arch/x86/kvm/x86.c:2261
>  [<ffffffff811cdd1d>] vmx_set_msr+0x27d/0xcb0 arch/x86/kvm/vmx.c:3149
>  [<ffffffff81094849>] kvm_set_msr+0xd9/0x170 arch/x86/kvm/x86.c:1084
>  [<ffffffff81094bf3>] do_set_msr+0x123/0x1a0 arch/x86/kvm/x86.c:1113
>  [<     inline     >] __msr_io arch/x86/kvm/x86.c:2523
>  [<ffffffff810bd280>] msr_io+0x250/0x460 arch/x86/kvm/x86.c:2560
>  [<ffffffff810d9ee0>] kvm_arch_vcpu_ioctl+0x360/0x44a0 arch/x86/kvm/x86.c:3401
>  [<ffffffff8107aee7>] kvm_vcpu_ioctl+0x237/0x11c0
> arch/x86/kvm/../../../virt/kvm/kvm_main.c:2708
>  [<     inline     >] vfs_ioctl fs/ioctl.c:43
>  [<ffffffff81abcb44>] do_vfs_ioctl+0x1c4/0x1630 fs/ioctl.c:679
>  [<     inline     >] SYSC_ioctl fs/ioctl.c:694
>  [<ffffffff81abe044>] SyS_ioctl+0x94/0xc0 fs/ioctl.c:685
>  [<ffffffff88143885>] entry_SYSCALL_64_fastpath+0x23/0xc6
>
> [ 3319.345108] Showing all locks held in the system:
> [ 3319.349897] 2 locks held by khungtaskd/1328:
> [ 3319.352888]  #0:  (rcu_read_lock){......}, at: [<ffffffff816f636c>] watchdog+0x1cc/0xd70
> [ 3319.363841]  #1:  (tasklist_lock){.+.+..}, at: [<ffffffff81560332>] debug_show_all_locks+0xd2/0x420
> [ 3319.374809] 3 locks held by syz-executor/5833:
> [ 3319.388745]  #0:  (&vcpu->mutex){+.+.+.}, at: [<ffffffff8106cd41>] vcpu_load+0x21/0x70
> [ 3319.396281]  #1:  (&kvm->srcu){......}, at: [<ffffffff810bd178>] msr_io+0x148/0x460
> [ 3319.403905]  #2:  (&kvm->lock){+.+.+.}, at: [<ffffffff81199143>] kvm_hv_set_msr_common+0x163/0x2a30
> [ 3319.422421] 2 locks held by syz-executor/5849:
> [ 3319.425646]  #0:  (&kvm->lock){+.+.+.}, at: [<ffffffff810def2e>] kvm_arch_vm_ioctl+0xb4e/0x23c0
> [ 3319.429594]  #1:  (&kvm->slots_lock){+.+.+.}, at: [<ffffffff8117f679>] kvm_create_pit+0x589/0x8c0
>
> On commit a25f0944ba9b1d8a6813fd6f1a86f1bd59ac25a6 (Nov 13)
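
The same shape as a self-contained userspace analogy, if it helps (pthread_rwlock stands in for kvm->srcu -- a writer waiting for readers to drain plays synchronize_srcu() -- and a plain mutex for kvm->lock; both threads hang):

#include <pthread.h>
#include <unistd.h>

static pthread_rwlock_t srcu = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t kvm_lock = PTHREAD_MUTEX_INITIALIZER;

static void *msr_path(void *arg)        /* msr_io -> kvm_hv_set_msr_common */
{
    pthread_rwlock_rdlock(&srcu);       /* srcu_read_lock(&kvm->srcu) */
    sleep(1);                           /* let the other thread take kvm_lock */
    pthread_mutex_lock(&kvm_lock);      /* blocks forever */
    pthread_mutex_unlock(&kvm_lock);
    pthread_rwlock_unlock(&srcu);
    return NULL;
}

static void *pit_path(void *arg)        /* kvm_arch_vm_ioctl -> kvm_create_pit */
{
    pthread_mutex_lock(&kvm_lock);      /* mutex_lock(&kvm->lock) */
    sleep(1);                           /* let the other thread enter its read section */
    pthread_rwlock_wrlock(&srcu);       /* synchronize_srcu_expedited(): blocks forever */
    pthread_rwlock_unlock(&srcu);
    pthread_mutex_unlock(&kvm_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, msr_path, NULL);
    pthread_create(&b, NULL, pit_path, NULL);
    pthread_join(a, NULL);              /* never returns */
    pthread_join(b, NULL);
    return 0;
}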


kvm_vm_ioctl_get_dirty_log is probably unrelated, because I also see
the following deadlocks that don't involve it:

# cat /proc/7169/task/*/stack
[<ffffffff815b9be8>] __synchronize_srcu+0x2f8/0x4a0
[<ffffffff815b9df3>] synchronize_srcu_expedited+0x13/0x20
[<ffffffff8108891b>] kvm_io_bus_register_dev+0x2ab/0x3e0
[<ffffffff8117f676>] kvm_create_pit+0x5c6/0x8c0
[<ffffffff810df816>] kvm_arch_vm_ioctl+0x1406/0x23c0
[<ffffffff81082b4a>] kvm_vm_ioctl+0x1fa/0x1a70
[<ffffffff81abcb44>] do_vfs_ioctl+0x1c4/0x1630
[<ffffffff81abe044>] SyS_ioctl+0x94/0xc0
[<ffffffff88143885>] entry_SYSCALL_64_fastpath+0x23/0xc6
[<ffffffffffffffff>] 0xffffffffffffffff

[<ffffffff81199103>] kvm_hv_set_msr_common+0x163/0x2a30
[<ffffffff810bf82b>] kvm_set_msr_common+0xb0b/0x23a0
[<ffffffff811cdcdd>] vmx_set_msr+0x27d/0xcb0
[<ffffffff81094879>] kvm_set_msr+0xd9/0x170
[<ffffffff81094c23>] do_set_msr+0x123/0x1a0
[<ffffffff810bd2b0>] msr_io+0x250/0x460
[<ffffffff810d9f10>] kvm_arch_vcpu_ioctl+0x360/0x44a0
[<ffffffff8107aef7>] kvm_vcpu_ioctl+0x237/0x11c0
[<ffffffff81abcb44>] do_vfs_ioctl+0x1c4/0x1630
[<ffffffff81abe044>] SyS_ioctl+0x94/0xc0
[<ffffffff88143885>] entry_SYSCALL_64_fastpath+0x23/0xc6
[<ffffffffffffffff>] 0xffffffffffffffff
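
One way to break the cycle might be to stop taking kvm->lock inside the SRCU read section, e.g. by giving the partition-wide Hyper-V MSR state its own mutex. A hedged sketch only -- "hv_lock" is a hypothetical field name, not anything currently in the tree:

/* in struct kvm_hv (arch/x86/include/asm/kvm_host.h): a dedicated mutex
 * protecting hv_guest_os_id/hv_hypercall/etc., mutex_init()ed at VM
 * creation */
struct mutex hv_lock;

/* in kvm_hv_set_msr_common() (arch/x86/kvm/hyperv.c) */
if (kvm_hv_msr_partition_wide(msr)) {
    int r;

    mutex_lock(&vcpu->kvm->arch.hyperv.hv_lock);   /* was: &vcpu->kvm->lock */
    r = kvm_hv_set_msr_pw(vcpu, msr, data, host);
    mutex_unlock(&vcpu->kvm->arch.hyperv.hv_lock);
    return r;
}

The MSR path would then never contend with paths that hold kvm->lock across synchronize_srcu(); whether any other kvm->lock users rely on it to serialize against these fields would need auditing.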
