Message-ID: <CACT4Y+Zu95tBs-0EvdiAKzUOsb4tczRRfCRTpLr4bg_OP9HuVg@mail.gmail.com>
Date: Fri, 8 Jan 2016 17:58:33 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Oleg Nesterov <oleg@...hat.com>,
Chen Gang <gang.chen.5i5j@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Cc: syzkaller <syzkaller@...glegroups.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>
Subject: mm: possible deadlock in mm_take_all_locks
Hello,

I've hit the following deadlock warning while running the syzkaller fuzzer
on commit b06f3a168cdcd80026276898fd1fee443ef25743. As far as I
understand this is a false positive, because both call stacks run under
mm_all_locks_mutex, which serializes them. What would be a proper way to
annotate such a locking discipline? (A simplified userspace sketch of the
pattern follows the report below.)
======================================================
[ INFO: possible circular locking dependency detected ]
4.4.0-rc8+ #211 Not tainted
-------------------------------------------------------
syz-executor/11520 is trying to acquire lock:
(&mapping->i_mmap_rwsem){++++..}, at: [< inline >]
vm_lock_mapping mm/mmap.c:3159
(&mapping->i_mmap_rwsem){++++..}, at: [<ffffffff816e2e6d>]
mm_take_all_locks+0x1bd/0x5f0 mm/mmap.c:3207
but task is already holding lock:
(&hugetlbfs_i_mmap_rwsem_key){+.+...}, at: [< inline >]
vm_lock_mapping mm/mmap.c:3159
(&hugetlbfs_i_mmap_rwsem_key){+.+...}, at: [<ffffffff816e2e6d>]
mm_take_all_locks+0x1bd/0x5f0 mm/mmap.c:3207
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&hugetlbfs_i_mmap_rwsem_key){+.+...}:
[<ffffffff814472ec>] lock_acquire+0x1dc/0x430
kernel/locking/lockdep.c:3585
[<ffffffff81434989>] _down_write_nest_lock+0x49/0xa0
kernel/locking/rwsem.c:129
[< inline >] vm_lock_mapping mm/mmap.c:3159
[<ffffffff816e2e6d>] mm_take_all_locks+0x1bd/0x5f0 mm/mmap.c:3207
[<ffffffff817295a8>] do_mmu_notifier_register+0x328/0x420
mm/mmu_notifier.c:267
[<ffffffff817296c2>] mmu_notifier_register+0x22/0x30
mm/mmu_notifier.c:317
[< inline >] kvm_init_mmu_notifier
arch/x86/kvm/../../../virt/kvm/kvm_main.c:474
[< inline >] kvm_create_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:592
[< inline >] kvm_dev_ioctl_create_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2966
[<ffffffff8101acea>] kvm_dev_ioctl+0x72a/0x920
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2995
[< inline >] vfs_ioctl fs/ioctl.c:43
[<ffffffff817b66f1>] do_vfs_ioctl+0x681/0xe40 fs/ioctl.c:607
[< inline >] SYSC_ioctl fs/ioctl.c:622
[<ffffffff817b6f3f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:613
[<ffffffff85e77af6>] entry_SYSCALL_64_fastpath+0x16/0x7a
arch/x86/entry/entry_64.S:185
-> #0 (&mapping->i_mmap_rwsem){++++..}:
[< inline >] check_prev_add kernel/locking/lockdep.c:1853
[< inline >] check_prevs_add kernel/locking/lockdep.c:1958
[< inline >] validate_chain kernel/locking/lockdep.c:2144
[<ffffffff8144398d>] __lock_acquire+0x320d/0x4720
kernel/locking/lockdep.c:3206
[< inline >] __lock_release kernel/locking/lockdep.c:3432
[<ffffffff81447e17>] lock_release+0x697/0xce0
kernel/locking/lockdep.c:3604
[<ffffffff81434ada>] up_write+0x1a/0x60 kernel/locking/rwsem.c:91
[< inline >] i_mmap_unlock_write include/linux/fs.h:504
[< inline >] vm_unlock_mapping mm/mmap.c:3254
[<ffffffff816e2bf6>] mm_drop_all_locks+0x266/0x320 mm/mmap.c:3278
[<ffffffff81729506>] do_mmu_notifier_register+0x286/0x420
mm/mmu_notifier.c:292
[<ffffffff817296c2>] mmu_notifier_register+0x22/0x30
mm/mmu_notifier.c:317
[< inline >] kvm_init_mmu_notifier
arch/x86/kvm/../../../virt/kvm/kvm_main.c:474
[< inline >] kvm_create_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:592
[< inline >] kvm_dev_ioctl_create_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2966
[<ffffffff8101acea>] kvm_dev_ioctl+0x72a/0x920
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2995
[< inline >] vfs_ioctl fs/ioctl.c:43
[<ffffffff817b66f1>] do_vfs_ioctl+0x681/0xe40 fs/ioctl.c:607
[< inline >] SYSC_ioctl fs/ioctl.c:622
[<ffffffff817b6f3f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:613
[<ffffffff85e77af6>] entry_SYSCALL_64_fastpath+0x16/0x7a
arch/x86/entry/entry_64.S:185
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&hugetlbfs_i_mmap_rwsem_key);
                               lock(&mapping->i_mmap_rwsem);
                               lock(&hugetlbfs_i_mmap_rwsem_key);
  lock(&mapping->i_mmap_rwsem);
*** DEADLOCK ***
3 locks held by syz-executor/11520:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff817295a0>]
do_mmu_notifier_register+0x320/0x420 mm/mmu_notifier.c:266
#1: (mm_all_locks_mutex){+.+...}, at: [<ffffffff816e2cf7>]
mm_take_all_locks+0x47/0x5f0 mm/mmap.c:3201
#2: (&hugetlbfs_i_mmap_rwsem_key){+.+...}, at: [< inline >]
vm_lock_mapping mm/mmap.c:3159
#2: (&hugetlbfs_i_mmap_rwsem_key){+.+...}, at: [<ffffffff816e2e6d>]
mm_take_all_locks+0x1bd/0x5f0 mm/mmap.c:3207
stack backtrace:
CPU: 2 PID: 11520 Comm: syz-executor Not tainted 4.4.0-rc8+ #211
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
00000000ffffffff ffff88003613fa10 ffffffff82907ccd ffffffff88911190
ffffffff88911190 ffffffff889321c0 ffff88003613fa60 ffffffff8143cb68
ffff880034bbaf00 ffff880034bbb73a 0000000000000000 ffff880034bbb718
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff82907ccd>] dump_stack+0x6f/0xa2 lib/dump_stack.c:50
[<ffffffff8143cb68>] print_circular_bug+0x288/0x340
kernel/locking/lockdep.c:1226
[< inline >] check_prev_add kernel/locking/lockdep.c:1853
[< inline >] check_prevs_add kernel/locking/lockdep.c:1958
[< inline >] validate_chain kernel/locking/lockdep.c:2144
[<ffffffff8144398d>] __lock_acquire+0x320d/0x4720 kernel/locking/lockdep.c:3206
[< inline >] __lock_release kernel/locking/lockdep.c:3432
[<ffffffff81447e17>] lock_release+0x697/0xce0 kernel/locking/lockdep.c:3604
[<ffffffff81434ada>] up_write+0x1a/0x60 kernel/locking/rwsem.c:91
[< inline >] i_mmap_unlock_write include/linux/fs.h:504
[< inline >] vm_unlock_mapping mm/mmap.c:3254
[<ffffffff816e2bf6>] mm_drop_all_locks+0x266/0x320 mm/mmap.c:3278
[<ffffffff81729506>] do_mmu_notifier_register+0x286/0x420 mm/mmu_notifier.c:292
[<ffffffff817296c2>] mmu_notifier_register+0x22/0x30 mm/mmu_notifier.c:317
[< inline >] kvm_init_mmu_notifier
arch/x86/kvm/../../../virt/kvm/kvm_main.c:474
[< inline >] kvm_create_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:592
[< inline >] kvm_dev_ioctl_create_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2966
[<ffffffff8101acea>] kvm_dev_ioctl+0x72a/0x920
arch/x86/kvm/../../../virt/kvm/kvm_main.c:2995
[< inline >] vfs_ioctl fs/ioctl.c:43
[<ffffffff817b66f1>] do_vfs_ioctl+0x681/0xe40 fs/ioctl.c:607
[< inline >] SYSC_ioctl fs/ioctl.c:622
[<ffffffff817b6f3f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:613
[<ffffffff85e77af6>] entry_SYSCALL_64_fastpath+0x16/0x7a
arch/x86/entry/entry_64.S:185
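
For illustration, here is a minimal userspace analogue of the pattern that
mm_take_all_locks() follows (a sketch, not kernel code; all names such as
all_locks_mutex and take_all_locks are made up for the example). A single
outer mutex excludes any concurrent "take all the locks" path, so the order
in which the per-mapping locks are taken does not matter, but a class-based
validator like lockdep still sees both A->B and B->A orderings between the
hugetlbfs and regular i_mmap_rwsem classes and reports an inversion, even
though vm_lock_mapping() already uses _down_write_nest_lock (visible in the
trace above):

/*
 * Userspace sketch of the mm_take_all_locks() discipline: a global
 * mutex serializes every path that grabs the whole set of per-object
 * locks, so the per-object ordering is irrelevant for correctness.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t all_locks_mutex = PTHREAD_MUTEX_INITIALIZER; /* plays mm_all_locks_mutex */
static pthread_rwlock_t mapping_a = PTHREAD_RWLOCK_INITIALIZER;     /* plays i_mmap_rwsem, class A */
static pthread_rwlock_t mapping_b = PTHREAD_RWLOCK_INITIALIZER;     /* plays i_mmap_rwsem, class B */

/* Take every per-mapping lock; the outer mutex excludes any other
 * "take all" caller, so first/second may arrive in either order. */
static void take_all_locks(pthread_rwlock_t *first, pthread_rwlock_t *second)
{
	pthread_mutex_lock(&all_locks_mutex);
	pthread_rwlock_wrlock(first);
	pthread_rwlock_wrlock(second);
}

static void drop_all_locks(pthread_rwlock_t *first, pthread_rwlock_t *second)
{
	pthread_rwlock_unlock(second);
	pthread_rwlock_unlock(first);
	pthread_mutex_unlock(&all_locks_mutex);
}

int main(void)
{
	/* One caller happens to see A before B ... */
	take_all_locks(&mapping_a, &mapping_b);
	drop_all_locks(&mapping_a, &mapping_b);

	/* ... another sees B before A; still deadlock-free, but a
	 * class-based lock validator flags the inverted order. */
	take_all_locks(&mapping_b, &mapping_a);
	drop_all_locks(&mapping_b, &mapping_a);

	printf("no deadlock: both orders are serialized by all_locks_mutex\n");
	return 0;
}

The question is how to express to lockdep that the outer mutex, rather than
any fixed per-class order, is what makes this safe.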