Message-ID: <20070410101922.728625bb@localhost>
Date: Tue, 10 Apr 2007 10:19:22 +0200
From: Paolo Ornati <ornati@...twebnet.it>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [XFS] INFO: possible circular locking dependency detected
I haven't seen this reported yet, so here it is:
[ 9564.772749]
[ 9564.772752] =======================================================
[ 9564.772757] [ INFO: possible circular locking dependency detected ]
[ 9564.772760] 2.6.21-rc6-gc2481cc4 #9
[ 9564.772762] -------------------------------------------------------
[ 9564.772765] xfssyncd/6157 is trying to acquire lock:
[ 9564.772767] (&(&ip->i_lock)->mr_lock){----}, at: [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.772775]
[ 9564.772776] but task is already holding lock:
[ 9564.772778] (&mp->m_ilock){--..}, at: [<ffffffff803348f6>] xfs_finish_reclaim_all+0x1d/0xc1
[ 9564.772785]
[ 9564.772786] which lock already depends on the new lock.
[ 9564.772787]
[ 9564.772789]
[ 9564.772790] the existing dependency chain (in reverse order) is:
[ 9564.772792]
[ 9564.772793] -> #1 (&mp->m_ilock){--..}:
[ 9564.772796] [<ffffffff802478f1>] __lock_acquire+0xa3d/0xbe4
[ 9564.772804] [<ffffffff8031b217>] xfs_iget_core+0x60e/0x6c4
[ 9564.772811] [<ffffffff80247b13>] lock_acquire+0x7b/0x9f
[ 9564.772817] [<ffffffff8031b217>] xfs_iget_core+0x60e/0x6c4
[ 9564.772824] [<ffffffff804f6f19>] __mutex_lock_slowpath+0xe1/0x263
[ 9564.772832] [<ffffffff8031b217>] xfs_iget_core+0x60e/0x6c4
[ 9564.772839] [<ffffffff8031b371>] xfs_iget+0xa4/0x14b
[ 9564.772845] [<ffffffff8032c9b8>] xfs_mountfs+0x708/0x954
[ 9564.772852] [<ffffffff8033b37f>] xfs_setsize_buftarg_flags+0x30/0x9e
[ 9564.772859] [<ffffffff80332a67>] xfs_mount+0x317/0x39d
[ 9564.772866] [<ffffffff803427ff>] xfs_fs_fill_super+0x0/0x1a1
[ 9564.772873] [<ffffffff8034287d>] xfs_fs_fill_super+0x7e/0x1a1
[ 9564.772880] [<ffffffff804f8586>] _spin_unlock+0x17/0x20
[ 9564.772887] [<ffffffff80277b65>] sget+0x377/0x389
[ 9564.772894] [<ffffffff802774e0>] set_bdev_super+0x0/0xf
[ 9564.772900] [<ffffffff802774ef>] test_bdev_super+0x0/0xd
[ 9564.772907] [<ffffffff802784b2>] get_sb_bdev+0x121/0x17b
[ 9564.772913] [<ffffffff80277ff5>] vfs_kern_mount+0x4f/0x8a
[ 9564.772920] [<ffffffff80278072>] do_kern_mount+0x36/0x4d
[ 9564.772926] [<ffffffff8028b3d6>] do_mount+0x625/0x699
[ 9564.772934] [<ffffffff80289e83>] mntput_no_expire+0x1c/0x85
[ 9564.772941] [<ffffffff8027ef8e>] link_path_walk+0xce/0xe0
[ 9564.772948] [<ffffffff80236f20>] sigprocmask+0x28/0xc1
[ 9564.772956] [<ffffffff80272a18>] kmem_cache_free+0xcf/0xd9
[ 9564.772963] [<ffffffff80246a13>] trace_hardirqs_on+0x124/0x14f
[ 9564.772970] [<ffffffff8025c17d>] get_page_from_freelist+0x1ef/0x35d
[ 9564.772977] [<ffffffff80246a13>] trace_hardirqs_on+0x124/0x14f
[ 9564.772984] [<ffffffff8025c358>] __alloc_pages+0x6d/0x2b4
[ 9564.772990] [<ffffffff8028b4d4>] sys_mount+0x8a/0xd7
[ 9564.772997] [<ffffffff8020962e>] system_call+0x7e/0x83
[ 9564.773004] [<ffffffffffffffff>] 0xffffffffffffffff
[ 9564.773014]
[ 9564.773015] -> #0 (&(&ip->i_lock)->mr_lock){----}:
[ 9564.773018] [<ffffffff8024611f>] print_circular_bug_header+0xcc/0xd3
[ 9564.773025] [<ffffffff802477ed>] __lock_acquire+0x939/0xbe4
[ 9564.773032] [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.773039] [<ffffffff80247b13>] lock_acquire+0x7b/0x9f
[ 9564.773045] [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.773052] [<ffffffff80242d08>] down_write_trylock+0x2f/0x35
[ 9564.773059] [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.773066] [<ffffffff80334919>] xfs_finish_reclaim_all+0x40/0xc1
[ 9564.773073] [<ffffffff80332567>] xfs_syncsub+0x4e/0x237
[ 9564.773080] [<ffffffff8023ff12>] keventd_create_kthread+0x0/0x6d
[ 9564.773087] [<ffffffff80341e68>] vfs_sync_worker+0x19/0x38
[ 9564.773094] [<ffffffff80342157>] xfssyncd+0x10c/0x15d
[ 9564.773101] [<ffffffff8034204b>] xfssyncd+0x0/0x15d
[ 9564.773107] [<ffffffff80240182>] kthread+0xd1/0x103
[ 9564.773114] [<ffffffff8020a4b8>] child_rip+0xa/0x12
[ 9564.773121] [<ffffffff804f884c>] _spin_unlock_irq+0x24/0x27
[ 9564.773127] [<ffffffff80209bcc>] restore_args+0x0/0x30
[ 9564.773134] [<ffffffff802400b1>] kthread+0x0/0x103
[ 9564.773140] [<ffffffff8020a4ae>] child_rip+0x0/0x12
[ 9564.773147] [<ffffffffffffffff>] 0xffffffffffffffff
[ 9564.773153]
[ 9564.773154] other info that might help us debug this:
[ 9564.773155]
[ 9564.773158] 1 lock held by xfssyncd/6157:
[ 9564.773159] #0: (&mp->m_ilock){--..}, at: [<ffffffff803348f6>] xfs_finish_reclaim_all+0x1d/0xc1
[ 9564.773166]
[ 9564.773167] stack backtrace:
[ 9564.773169]
[ 9564.773169] Call Trace:
[ 9564.773173] [<ffffffff80245d6a>] print_circular_bug_tail+0x69/0x72
[ 9564.773177] [<ffffffff8024611f>] print_circular_bug_header+0xcc/0xd3
[ 9564.773180] [<ffffffff802477ed>] __lock_acquire+0x939/0xbe4
[ 9564.773183] [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.773187] [<ffffffff80247b13>] lock_acquire+0x7b/0x9f
[ 9564.773190] [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.773194] [<ffffffff80242d08>] down_write_trylock+0x2f/0x35
[ 9564.773197] [<ffffffff8031abae>] xfs_ilock_nowait+0x6d/0xc8
[ 9564.773200] [<ffffffff80334919>] xfs_finish_reclaim_all+0x40/0xc1
[ 9564.773204] [<ffffffff80332567>] xfs_syncsub+0x4e/0x237
[ 9564.773207] [<ffffffff8023ff12>] keventd_create_kthread+0x0/0x6d
[ 9564.773211] [<ffffffff80341e68>] vfs_sync_worker+0x19/0x38
[ 9564.773214] [<ffffffff80342157>] xfssyncd+0x10c/0x15d
[ 9564.773217] [<ffffffff8034204b>] xfssyncd+0x0/0x15d
[ 9564.773220] [<ffffffff80240182>] kthread+0xd1/0x103
[ 9564.773224] [<ffffffff8020a4b8>] child_rip+0xa/0x12
[ 9564.773227] [<ffffffff804f884c>] _spin_unlock_irq+0x24/0x27
[ 9564.773230] [<ffffffff80209bcc>] restore_args+0x0/0x30
[ 9564.773233] [<ffffffff802400b1>] kthread+0x0/0x103
[ 9564.773236] [<ffffffff8020a4ae>] child_rip+0x0/0x12
[ 9564.773238]
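
If I'm reading the two chains right, this looks like the usual AB-BA ordering report: the iget path at mount time seems to establish mr_lock -> m_ilock, while the xfssyncd reclaim path holds m_ilock and then goes for mr_lock via down_write_trylock. As a rough user-space sketch of that pattern (pthread mutexes and made-up names, nothing to do with the actual XFS locking code):

/*
 * Hypothetical illustration of the AB-BA lock ordering flagged above.
 * lock_a roughly plays the role of the inode's mr_lock, lock_b that of
 * mp->m_ilock; the names and paths are invented for this sketch only.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Path 1 (mount/iget-like): takes A, then B */
static void *path_one(void *arg)
{
	pthread_mutex_lock(&lock_a);
	pthread_mutex_lock(&lock_b);
	puts("path_one: A then B");
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* Path 2 (syncd/reclaim-like): takes B, then tries A (reversed order) */
static void *path_two(void *arg)
{
	pthread_mutex_lock(&lock_b);
	/* the trylock avoids an actual deadlock, but the reversed
	 * ordering is still what a checker like lockdep reports */
	if (pthread_mutex_trylock(&lock_a) == 0) {
		puts("path_two: B then A");
		pthread_mutex_unlock(&lock_a);
	}
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, path_one, NULL);
	pthread_create(&t2, NULL, path_two, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Compile with "gcc -pthread"; if path_two used a plain lock instead of a trylock, the two threads could deadlock against each other, which looks like the ordering lockdep is warning about above.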
--
Paolo Ornati
Linux 2.6.21-rc6-gc2481cc4 on x86_64
[Attachment: "config.gz" (application/x-gzip, 9125 bytes)]