Message-ID: <a4423d670903060130y61407cccp71d50d13c06a4bd@mail.gmail.com>
Date:	Fri, 6 Mar 2009 12:30:53 +0300
From:	Alexander Beregalov <a.beregalov@...il.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Felix Blyakher <felixb@....com>,
	Eric Sandeen <sandeen@...deen.net>,
	"linux-next@...r.kernel.org" <linux-next@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>, xfs@....sgi.com
Subject: Re: next-20090220: XFS: inconsistent lock state

Hi,

Is this the same issue as the one reported earlier in this thread? I am
seeing the lockdep report below on 2.6.29-rc7-next-20090305:

[ INFO: inconsistent lock state ]
2.6.29-rc7-next-20090305 #8
---------------------------------
inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-R} usage.
kswapd0/318 [HC0[0]:SC0[0]:HE1:SE1] takes:
 (&(&ip->i_lock)->mr_lock){+++++?}, at: [<ffffffff803cbc4a>] xfs_ilock+0xaa/0x120
{RECLAIM_FS-ON-W} state was registered at:
  [<ffffffff8026db49>] mark_held_locks+0x69/0x90
  [<ffffffff8026dbb1>] lockdep_trace_alloc+0x41/0xb0
  [<ffffffff802c6cbd>] kmem_cache_alloc+0x2d/0x100
  [<ffffffff803f37e7>] kmem_zone_alloc+0x97/0xe0
  [<ffffffff803f3849>] kmem_zone_zalloc+0x19/0x50
  [<ffffffff803b4955>] xfs_da_state_alloc+0x15/0x20
  [<ffffffff803c0387>] xfs_dir2_node_lookup+0x17/0x110
  [<ffffffff803b9688>] xfs_dir_lookup+0x1c8/0x1e0
  [<ffffffff803f08af>] xfs_lookup+0x4f/0xe0
  [<ffffffff803fcb99>] xfs_vn_lookup+0x49/0x90
  [<ffffffff802d5556>] do_lookup+0x1b6/0x250
  [<ffffffff802d5885>] __link_path_walk+0x295/0xec0
  [<ffffffff802d66ee>] path_walk+0x6e/0xe0
  [<ffffffff802d6866>] do_path_lookup+0xa6/0x1d0
  [<ffffffff802d6a65>] path_lookup_open+0x65/0xd0
  [<ffffffff802d796a>] do_filp_open+0xaa/0x8f0
  [<ffffffff802c8a48>] do_sys_open+0x78/0x110
  [<ffffffff802c8b0b>] sys_open+0x1b/0x20
  [<ffffffff8020bbdb>] system_call_fastpath+0x16/0x1b
  [<ffffffffffffffff>] 0xffffffffffffffff
irq event stamp: 531011
hardirqs last  enabled at (531011): [<ffffffff80284194>] __rcu_read_unlock+0xa4/0xc0
hardirqs last disabled at (531010): [<ffffffff80284149>] __rcu_read_unlock+0x59/0xc0
softirqs last  enabled at (524334): [<ffffffff80249719>] __do_softirq+0x139/0x150
softirqs last disabled at (524329): [<ffffffff8020cd5c>] call_softirq+0x1c/0x30

other info that might help us debug this:
2 locks held by kswapd0/318:
 #0:  (shrinker_rwsem){++++..}, at: [<ffffffff802a4c12>] shrink_slab+0x32/0x1c0
 #1:  (iprune_mutex){+.+.-.}, at: [<ffffffff802dfbc4>] shrink_icache_memory+0x84/0x2a0

stack backtrace:
Pid: 318, comm: kswapd0 Not tainted 2.6.29-rc7-next-20090305 #8
Call Trace:
 [<ffffffff8026ca5d>] print_usage_bug+0x17d/0x190
 [<ffffffff8026d9ad>] mark_lock+0x31d/0x450
 [<ffffffff8026cbd0>] ? check_usage_forwards+0x0/0xc0
 [<ffffffff8026ebcd>] __lock_acquire+0x40d/0x12a0
 [<ffffffff8026faf1>] lock_acquire+0x91/0xc0
 [<ffffffff803cbc4a>] ? xfs_ilock+0xaa/0x120
 [<ffffffff8025f7d0>] down_read_nested+0x50/0x90
 [<ffffffff803cbc4a>] ? xfs_ilock+0xaa/0x120
 [<ffffffff803cbc4a>] xfs_ilock+0xaa/0x120
 [<ffffffff803f09c4>] xfs_free_eofblocks+0x84/0x280
 [<ffffffff8026ea8c>] ? __lock_acquire+0x2cc/0x12a0
 [<ffffffff803f144e>] xfs_inactive+0xee/0x540
 [<ffffffff803fea57>] xfs_fs_clear_inode+0x67/0x70
 [<ffffffff802df7fa>] clear_inode+0x9a/0x120
 [<ffffffff802dfa60>] dispose_list+0x30/0x110
 [<ffffffff802dfd88>] shrink_icache_memory+0x248/0x2a0
 [<ffffffff802a4d3c>] shrink_slab+0x15c/0x1c0
 [<ffffffff802a6cba>] kswapd+0x56a/0x6b0
 [<ffffffff802374b6>] ? finish_task_switch+0x46/0x110
 [<ffffffff802a4120>] ? isolate_pages_global+0x0/0x270
 [<ffffffff8025b450>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff802a6750>] ? kswapd+0x0/0x6b0
 [<ffffffff8025af66>] kthread+0x56/0x90
 [<ffffffff8020cc5a>] child_rip+0xa/0x20
 [<ffffffff802374f9>] ? finish_task_switch+0x89/0x110
 [<ffffffff8063da36>] ? _spin_unlock_irq+0x36/0x60
 [<ffffffff8020c640>] ? restore_args+0x0/0x30
 [<ffffffff8025af10>] ? kthread+0x0/0x90
 [<ffffffff8020cc50>] ? child_rip+0x0/0x20
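
For anyone reading the splat cold: lockdep is complaining that the same lock
class is used on both sides of memory reclaim. In the "registered at" trace
above, the inode lock is held for write while the directory lookup path does
an allocation that may recurse into filesystem reclaim (RECLAIM_FS-ON-W); in
the backtrace, kswapd's inode shrinker takes the same lock class for read
from inside reclaim (IN-RECLAIM_FS-R). A minimal, made-up sketch of the
pattern (identifiers are illustrative, treating the lock as a plain
rw_semaphore; this is not the XFS code in the trace):

    /* Thread A (syscall path): allocate while holding the lock.
     * A reclaim-capable mask lets the allocation recurse into FS
     * reclaim, so lockdep records RECLAIM_FS-ON-W on the lock. */
    down_write(&ip->i_lock);
    state = kmalloc(sizeof(*state), GFP_KERNEL);
    ...
    up_write(&ip->i_lock);

    /* Thread B (kswapd): inode reclaim takes the same lock class in
     * read mode, which lockdep records as IN-RECLAIM_FS-R. */
    down_read(&ip->i_lock);
    /* truncate / free blocks of the inode being reclaimed */
    up_read(&ip->i_lock);

One common way to quiet this class of report is to keep allocations made
under such a lock out of filesystem reclaim (GFP_NOFS / KM_NOFS), assuming
the recursion really is unsafe; whether that or a lockdep annotation is the
right fix here is for the XFS folks to say.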
