Message-ID: <55003A7A.1090305@bmw-carit.de>
Date: Wed, 11 Mar 2015 13:52:10 +0100
From: Daniel Wagner <daniel.wagner@...-carit.de>
To: <linux-fsdevel@...r.kernel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: possible circular locking dependency detected
Hi,

I am seeing the lockdep report below when I boot my KVM guest. I don't
think this one has been reported yet; if I missed an existing report,
sorry for the noise.
[ 92.867888] ======================================================
[ 92.868440] [ INFO: possible circular locking dependency detected ]
[ 92.868591] 4.0.0-rc3 #1 Not tainted
[ 92.868591] -------------------------------------------------------
[ 92.868591] sulogin/1617 is trying to acquire lock:
[ 92.868591] (&isec->lock){+.+.+.}, at: [<ffffffff8149e185>] inode_doinit_with_dentry+0xa5/0x680
[ 92.868591]
[ 92.868591] but task is already holding lock:
[ 92.868591] (&mm->mmap_sem){++++++}, at: [<ffffffff8118635f>] vm_mmap_pgoff+0x6f/0xc0
[ 92.868591]
[ 92.868591] which lock already depends on the new lock.
[ 92.868591]
[ 92.868591]
[ 92.868591] the existing dependency chain (in reverse order) is:
[ 92.868591]
-> #2 (&mm->mmap_sem){++++++}:
[ 92.868591] [<ffffffff810a7ae5>] lock_acquire+0xd5/0x2a0
[ 92.868591] [<ffffffff8119879c>] might_fault+0x8c/0xb0
[ 92.868591] [<ffffffff811e6832>] filldir+0x92/0x120
[ 92.868591] [<ffffffff8138880b>] xfs_dir2_block_getdents.isra.12+0x19b/0x1f0
[ 92.868591] [<ffffffff81388994>] xfs_readdir+0x134/0x2f0
[ 92.868591] [<ffffffff8138b78b>] xfs_file_readdir+0x2b/0x30
[ 92.868591] [<ffffffff811e660a>] iterate_dir+0x9a/0x140
[ 92.868591] [<ffffffff811e6af1>] SyS_getdents+0x81/0x100
[ 92.868591] [<ffffffff81b5cfb2>] system_call_fastpath+0x12/0x17
[ 92.868591]
-> #1 (&xfs_dir_ilock_class){++++.+}:
[ 92.868591] [<ffffffff810a7ae5>] lock_acquire+0xd5/0x2a0
[ 92.868591] [<ffffffff8109feb7>] down_read_nested+0x57/0xa0
[ 92.868591] [<ffffffff8139b612>] xfs_ilock+0x92/0x290
[ 92.868591] [<ffffffff8139b888>] xfs_ilock_attr_map_shared+0x38/0x50
[ 92.868591] [<ffffffff8133c081>] xfs_attr_get+0xc1/0x180
[ 92.868591] [<ffffffff813aa9d7>] xfs_xattr_get+0x37/0x50
[ 92.868591] [<ffffffff811fb21f>] generic_getxattr+0x4f/0x70
[ 92.868591] [<ffffffff8149e232>] inode_doinit_with_dentry+0x152/0x680
[ 92.868591] [<ffffffff8149e83b>] sb_finish_set_opts+0xdb/0x260
[ 92.868591] [<ffffffff8149ec84>] selinux_set_mnt_opts+0x2c4/0x600
[ 92.868591] [<ffffffff8149f024>] superblock_doinit+0x64/0xd0
[ 92.868591] [<ffffffff8149f0a0>] delayed_superblock_init+0x10/0x20
[ 92.868591] [<ffffffff811d2d52>] iterate_supers+0xb2/0x110
[ 92.868591] [<ffffffff8149f333>] selinux_complete_init+0x33/0x40
[ 92.868591] [<ffffffff814aea46>] security_load_policy+0xf6/0x560
[ 92.868591] [<ffffffff814a0d42>] sel_write_load+0xa2/0x740
[ 92.868591] [<ffffffff811cf92a>] vfs_write+0xba/0x200
[ 92.868591] [<ffffffff811d00a9>] SyS_write+0x49/0xb0
[ 92.868591] [<ffffffff81b5cfb2>] system_call_fastpath+0x12/0x17
[ 92.868591]
-> #0 (&isec->lock){+.+.+.}:
[ 92.868591] [<ffffffff810a6a4e>] __lock_acquire+0x1ede/0x1ee0
[ 92.868591] [<ffffffff810a7ae5>] lock_acquire+0xd5/0x2a0
[ 92.868591] [<ffffffff81b588be>] mutex_lock_nested+0x6e/0x3f0
[ 92.868591] [<ffffffff8149e185>] inode_doinit_with_dentry+0xa5/0x680
[ 92.868591] [<ffffffff8149f2fc>] selinux_d_instantiate+0x1c/0x20
[ 92.868591] [<ffffffff81491b4b>] security_d_instantiate+0x1b/0x30
[ 92.868591] [<ffffffff811e9f74>] d_instantiate+0x54/0x80
[ 92.868591] [<ffffffff8118215d>] __shmem_file_setup+0xcd/0x230
[ 92.868591] [<ffffffff81185e28>] shmem_zero_setup+0x28/0x70
[ 92.868591] [<ffffffff811a2408>] mmap_region+0x5d8/0x5f0
[ 92.868591] [<ffffffff811a273b>] do_mmap_pgoff+0x31b/0x400
[ 92.868591] [<ffffffff81186380>] vm_mmap_pgoff+0x90/0xc0
[ 92.868591] [<ffffffff811a0ae6>] SyS_mmap_pgoff+0x106/0x290
[ 92.868591] [<ffffffff81008a22>] SyS_mmap+0x22/0x30
[ 92.868591] [<ffffffff81b5cfb2>] system_call_fastpath+0x12/0x17
[ 92.868591]
[ 92.868591] other info that might help us debug this:
[ 92.868591]
[ 92.868591] Chain exists of:
&isec->lock --> &xfs_dir_ilock_class --> &mm->mmap_sem
[ 92.868591] Possible unsafe locking scenario:
[ 92.868591]
[ 92.868591] CPU0 CPU1
[ 92.868591] ---- ----
[ 92.868591] lock(&mm->mmap_sem);
[ 92.868591] lock(&xfs_dir_ilock_class);
[ 92.868591] lock(&mm->mmap_sem);
[ 92.868591] lock(&isec->lock);
[ 92.868591]
[ 92.868591] *** DEADLOCK ***
[ 92.868591]
[ 92.868591] 1 lock held by sulogin/1617:
[ 92.868591] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8118635f>] vm_mmap_pgoff+0x6f/0xc0
[ 92.868591]
[ 92.868591] stack backtrace:
[ 92.868591] CPU: 0 PID: 1617 Comm: sulogin Not tainted 4.0.0-rc3 #1
[ 92.868591] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140709_153950- 04/01/2014
[ 92.868591] ffffffff82e6e980 ffff880078d279f8 ffffffff81b508c5 0000000000000007
[ 92.868591] ffffffff82e31af0 ffff880078d27a48 ffffffff810a30bd ffff880078fd87a0
[ 92.868591] ffff880078d27ac8 ffff880078d27a48 ffff880078fd8000 0000000000000001
[ 92.868591] Call Trace:
[ 92.868591] [<ffffffff81b508c5>] dump_stack+0x4c/0x65
[ 92.868591] [<ffffffff810a30bd>] print_circular_bug+0x1cd/0x230
[ 92.868591] [<ffffffff810a6a4e>] __lock_acquire+0x1ede/0x1ee0
[ 92.868591] [<ffffffff810a0be5>] ? __bfs+0x105/0x240
[ 92.868591] [<ffffffff810a7ae5>] lock_acquire+0xd5/0x2a0
[ 92.868591] [<ffffffff8149e185>] ? inode_doinit_with_dentry+0xa5/0x680
[ 92.868591] [<ffffffff81b588be>] mutex_lock_nested+0x6e/0x3f0
[ 92.868591] [<ffffffff8149e185>] ? inode_doinit_with_dentry+0xa5/0x680
[ 92.868591] [<ffffffff811e9ef5>] ? __d_instantiate+0xd5/0x100
[ 92.868591] [<ffffffff8149e185>] ? inode_doinit_with_dentry+0xa5/0x680
[ 92.868591] [<ffffffff811e9f69>] ? d_instantiate+0x49/0x80
[ 92.868591] [<ffffffff8149e185>] inode_doinit_with_dentry+0xa5/0x680
[ 92.868591] [<ffffffff811e9f69>] ? d_instantiate+0x49/0x80
[ 92.868591] [<ffffffff8149f2fc>] selinux_d_instantiate+0x1c/0x20
[ 92.868591] [<ffffffff81491b4b>] security_d_instantiate+0x1b/0x30
[ 92.868591] [<ffffffff811e9f74>] d_instantiate+0x54/0x80
[ 92.868591] [<ffffffff8118215d>] __shmem_file_setup+0xcd/0x230
[ 92.868591] [<ffffffff81185e28>] shmem_zero_setup+0x28/0x70
[ 92.868591] [<ffffffff811a2408>] mmap_region+0x5d8/0x5f0
[ 92.868591] [<ffffffff811a273b>] do_mmap_pgoff+0x31b/0x400
[ 92.868591] [<ffffffff8118635f>] ? vm_mmap_pgoff+0x6f/0xc0
[ 92.868591] [<ffffffff81186380>] vm_mmap_pgoff+0x90/0xc0
[ 92.868591] [<ffffffff811a0ae6>] SyS_mmap_pgoff+0x106/0x290
[ 92.868591] [<ffffffff81507bfb>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[ 92.868591] [<ffffffff81008a22>] SyS_mmap+0x22/0x30
[ 92.868591] [<ffffffff81b5cfb2>] system_call_fastpath+0x12/0x17
cheers,
daniel
--