Message-ID: <55B11E8B.20008@oracle.com>
Date: Thu, 23 Jul 2015 13:04:11 -0400
From: Sasha Levin <sasha.levin@...cle.com>
To: Al Viro <viro@...IV.linux.org.uk>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
CC: LKML <linux-kernel@...r.kernel.org>, Dave Jones <davej@...hat.com>
Subject: fs: circular locking dependency cred_guard_mutex vs i_mutex_key
Hi all,
While fuzzing with trinity in a KVM tools guest running a mainline kernel, I've stumbled on the following lockdep splat:
[4660967.565503] ======================================================
[4660967.566475] [ INFO: possible circular locking dependency detected ]
[4660967.568699] 4.2.0-rc3-sasha-00059-g77b356f #2377 Not tainted
[4660967.570385] -------------------------------------------------------
[4660967.572650] trinity-main/12372 is trying to acquire lock:
[4660967.575752] (&sig->cred_guard_mutex){+.+.+.}, at: mm_access (kernel/fork.c:794)
[4660967.580706] Mutex: counter: 1 owner: None
[4660967.581685]
[4660967.581685] but task is already holding lock:
[4660967.591344] (&sb->s_type->i_mutex_key){+.+.+.}, at: walk_component (fs/namei.c:1610 fs/namei.c:1717)
[4660967.593698] Mutex: counter: -1 owner: trinity-main
[4660967.594961]
[4660967.594961] which lock already depends on the new lock.
[4660967.594961]
[4660967.597555]
[4660967.597555] the existing dependency chain (in reverse order) is:
[4660967.599643]
-> #1 (&sb->s_type->i_mutex_key){+.+.+.}:
[4660967.601090] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3620)
[4660967.602556] mutex_lock_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:617)
[4660967.604054] walk_component (fs/namei.c:1610 fs/namei.c:1717)
[4660967.605477] link_path_walk (fs/namei.c:1937)
[4660967.607695] path_openat (fs/namei.c:3295)
[4660967.610822] do_filp_open (fs/namei.c:3330)
[4660967.613921] do_open_execat (fs/exec.c:772)
[4660967.617512] do_execveat_common.isra.26 (fs/exec.c:1524)
[4660967.621455] SyS_execve (fs/exec.c:1704)
[4660967.624307] return_from_execve (arch/x86/entry/entry_64.S:427)
[4660967.627565]
-> #0 (&sig->cred_guard_mutex){+.+.+.}:
[4660967.630367] __lock_acquire (kernel/locking/lockdep.c:1877 kernel/locking/lockdep.c:1982 kernel/locking/lockdep.c:2168 kernel/locking/lockdep.c:3239)
[4660967.633868] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3620)
[4660967.637134] mutex_lock_killable_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:637)
[4660967.639062] mm_access (kernel/fork.c:794)
[4660967.640437] map_files_d_revalidate (fs/proc/base.c:1877)
[4660967.642190] lookup_dcache (fs/namei.c:1442)
[4660967.643732] __lookup_hash (fs/namei.c:1497)
[4660967.645200] walk_component (fs/namei.c:1611 fs/namei.c:1717)
[4660967.646616] path_lookupat (fs/namei.c:2098)
[4660967.648336] filename_lookup (fs/namei.c:2132)
[4660967.649803] user_path_at_empty (fs/namei.c:2301)
[4660967.651389] vfs_fstatat (include/linux/namei.h:52 fs/stat.c:106)
[4660967.652970] SYSC_newfstatat (fs/stat.c:298)
[4660967.654830] SyS_newfstatat (fs/stat.c:291)
[4660967.656443] tracesys_phase2 (arch/x86/entry/entry_64.S:266)
[4660967.658175]
[4660967.658175] other info that might help us debug this:
[4660967.658175]
[4660967.660326] Possible unsafe locking scenario:
[4660967.660326]
[4660967.661977]        CPU0                    CPU1
[4660967.663262]        ----                    ----
[4660967.664572]   lock(&sb->s_type->i_mutex_key);
[4660967.665954]                                lock(&sig->cred_guard_mutex);
[4660967.667885]                                lock(&sb->s_type->i_mutex_key);
[4660967.669786]   lock(&sig->cred_guard_mutex);
[4660967.671074]
[4660967.671074] *** DEADLOCK ***
[4660967.671074]
[4660967.672878] 1 lock held by trinity-main/12372:
[4660967.674068] #0: (&sb->s_type->i_mutex_key){+.+.+.}, at: walk_component (fs/namei.c:1610 fs/namei.c:1717)
[4660967.676808] Mutex: counter: -1 owner: trinity-main
[4660967.678088]
[4660967.678088] stack backtrace:
[4660967.679286] CPU: 9 PID: 12372 Comm: trinity-main Not tainted 4.2.0-rc3-sasha-00059-g77b356f #2377
[4660967.681463] ffffffffad09b510 ffff880065207948 ffffffffaa16bf08 0000000000000011
[4660967.683584] ffffffffad09b510 ffff880065207998 ffffffffa71bdcf1 ffff88006a963cc0
[4660967.685692] ffff8800652079f8 ffff880065207998 ffff88006a963c88 0000000000000001
[4660967.687704] Call Trace:
[4660967.688322] dump_stack (lib/dump_stack.c:52)
[4660967.689545] print_circular_bug (kernel/locking/lockdep.c:1252)
[4660967.691018] __lock_acquire (kernel/locking/lockdep.c:1877 kernel/locking/lockdep.c:1982 kernel/locking/lockdep.c:2168 kernel/locking/lockdep.c:3239)
[4660967.692588] lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3620)
[4660967.693929] ? mm_access (kernel/fork.c:794)
[4660967.695229] ? ___might_sleep (kernel/sched/core.c:7401 (discriminator 1))
[4660967.696574] ? mm_access (kernel/fork.c:794)
[4660967.697885] mutex_lock_killable_nested (kernel/locking/mutex.c:526 kernel/locking/mutex.c:637)
[4660967.699571] ? mm_access (kernel/fork.c:794)
[4660967.700852] mm_access (kernel/fork.c:794)
[4660967.702080] ? get_pid_task (kernel/pid.c:478)
[4660967.703371] map_files_d_revalidate (fs/proc/base.c:1877)
[4660967.704903] ? d_lookup (fs/dcache.c:2249)
[4660967.706066] ? lookup_dcache (fs/namei.c:1439)
[4660967.707681] lookup_dcache (fs/namei.c:1442)
[4660967.709127] ? walk_component (fs/namei.c:1610 fs/namei.c:1717)
[4660967.710730] __lookup_hash (fs/namei.c:1497)
[4660967.712147] walk_component (fs/namei.c:1611 fs/namei.c:1717)
[4660967.713812] path_lookupat (fs/namei.c:2098)
[4660967.715424] ? __might_fault (mm/memory.c:3763)
[4660967.716943] filename_lookup (fs/namei.c:2132)
[4660967.718894] ? kmem_cache_alloc (include/trace/events/kmem.h:53 mm/slub.c:2522)
[4660967.720811] ? getname_flags (fs/namei.c:135)
[4660967.722605] user_path_at_empty (fs/namei.c:2301)
[4660967.724447] vfs_fstatat (include/linux/namei.h:52 fs/stat.c:106)
[4660967.726087] SYSC_newfstatat (fs/stat.c:298)
[4660967.727678] ? lock_is_held (kernel/locking/lockdep.c:3661)
[4660967.728802] ? syscall_trace_enter_phase2 (arch/x86/kernel/ptrace.c:1592)
[4660967.730349] SyS_newfstatat (fs/stat.c:291)
[4660967.731761] tracesys_phase2 (arch/x86/entry/entry_64.S:266)
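
My reading of the two chains, for what it's worth: on the execve() side, cred_guard_mutex is taken (in prepare_bprm_creds(), if I'm reading fs/exec.c right) before do_open_execat() walks the binary's path and walk_component() takes the directory's i_mutex. On the fstatat() side, a lookup under /proc/<pid>/map_files/ takes i_mutex in walk_component() first, and then map_files_d_revalidate() calls mm_access(), which takes cred_guard_mutex. Classic AB-BA inversion. Below is a minimal userspace sketch of the same shape, with plain pthread mutexes standing in for the two kernel locks; all the names are made up for the illustration, this is not kernel code:

/* Minimal userspace sketch of the inversion (illustrative only):
 * "guard" stands in for cred_guard_mutex, "inode" for the
 * directory's i_mutex. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t inode = PTHREAD_MUTEX_INITIALIZER;

/* execve() side: cred_guard_mutex, then i_mutex during the path walk. */
static void *execve_side(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&guard);	/* prepare_bprm_creds() */
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&inode);	/* walk_component() in do_open_execat() */
	pthread_mutex_unlock(&inode);
	pthread_mutex_unlock(&guard);
	return NULL;
}

/* stat() side on /proc/<pid>/map_files/...: i_mutex, then cred_guard_mutex. */
static void *stat_side(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inode);	/* walk_component() on map_files */
	sleep(1);
	pthread_mutex_lock(&guard);	/* mm_access() from ->d_revalidate() */
	pthread_mutex_unlock(&guard);
	pthread_mutex_unlock(&inode);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, execve_side, NULL);
	pthread_create(&b, NULL, stat_side, NULL);
	/* Once each thread holds its first lock, both block forever on the
	 * other's: the *** DEADLOCK *** lockdep predicts above. */
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Build with "gcc -pthread"; with the sleeps in place both threads park on their second lock and neither join ever returns, which is exactly the scenario in the "Possible unsafe locking scenario" box.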
Thanks,
Sasha