Message-ID: <a4423d670904200042r476aa2d2p9b8125fe3c8600ac@mail.gmail.com>
Date:	Mon, 20 Apr 2009 11:42:25 +0400
From:	Alexander Beregalov <a.beregalov@...il.com>
To:	LKML <linux-kernel@...r.kernel.org>,
	Kernel Testers List <kernel-testers@...r.kernel.org>,
	Ingo Molnar <mingo@...e.hu>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	linux-fsdevel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>, xfs@....sgi.com
Subject: Re: 2.6.30-rc2: BUG: MAX_LOCKDEP_ENTRIES too low! when mounting rootfs

2009/4/15 Alexander Beregalov <a.beregalov@...il.com>:
> Hi
>
> bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive &
> transmit flow control ON
> BUG: MAX_LOCKDEP_ENTRIES too low!
> turning off the locking correctness validator.
> Pid: 2382, comm: mount Tainted: G        W  2.6.30-rc2-dirty #121
> Call Trace:
>  [<ffffffff80280aa7>] add_lock_to_list+0xf7/0x110
>  [<ffffffff80282ad1>] ? check_irq_usage+0xb1/0x110
>  [<ffffffff802df73c>] ? lock_super+0x3c/0x60
>  [<ffffffff80284f20>] __lock_acquire+0xe80/0x12b0
>  [<ffffffff802df73c>] ? lock_super+0x3c/0x60
>  [<ffffffff802853f0>] lock_acquire+0xa0/0xe0
>  [<ffffffff802df73c>] ? lock_super+0x3c/0x60
>  [<ffffffff805f0b50>] ? usbfs_fill_super+0x0/0xe0
>  [<ffffffff806d9178>] __mutex_lock_common+0x68/0x500
>  [<ffffffff802df73c>] ? lock_super+0x3c/0x60
>  [<ffffffff802df73c>] ? lock_super+0x3c/0x60
>  [<ffffffff805f0b50>] ? usbfs_fill_super+0x0/0xe0
>  [<ffffffff806d9738>] mutex_lock_nested+0x48/0x70
>  [<ffffffff802df73c>] lock_super+0x3c/0x60
>  [<ffffffff802dfa7a>] __fsync_super+0x2a/0xa0
>  [<ffffffff802dfb10>] fsync_super+0x20/0x50
>  [<ffffffff802f1f70>] ? shrink_dcache_sb+0x20/0x40
>  [<ffffffff802dfb8c>] do_remount_sb+0x4c/0x250
>  [<ffffffff802e08a5>] get_sb_single+0x75/0x100
>  [<ffffffff805f02b9>] usb_get_sb+0x29/0x50
>  [<ffffffff802df0e7>] vfs_kern_mount+0x67/0xf0
>  [<ffffffff802f942d>] ? get_fs_type+0x4d/0xf0
>  [<ffffffff802df21d>] do_kern_mount+0x5d/0x130
>  [<ffffffff802fcd88>] do_mount+0x2e8/0x930
>  [<ffffffff802a996e>] ? __get_free_pages+0x2e/0x80
>  [<ffffffff802fd4b0>] sys_mount+0xe0/0x120
>  [<ffffffff8020ba6b>] system_call_fastpath+0x16/0x1b

Here is another one, this time triggered through XFS:

BUG: MAX_LOCKDEP_ENTRIES too low!
turning off the locking correctness validator.
Pid: 1717, comm: mv Tainted: G        W  2.6.30-rc2-00429-gd91dfbb-dirty #38
Call Trace:
 [<ffffffff8027e4d7>] add_lock_to_list+0xf7/0x110
 [<ffffffff80280501>] ? check_irq_usage+0xb1/0x110
 [<ffffffff8041afa2>] ? xfs_mod_incore_sb_batch+0x42/0x180
 [<ffffffff80282930>] __lock_acquire+0xe60/0x1290
 [<ffffffff804dbf3e>] ? debug_check_no_obj_freed+0xae/0x210
 [<ffffffff80281da7>] ? __lock_acquire+0x2d7/0x1290
 [<ffffffff8041afa2>] ? xfs_mod_incore_sb_batch+0x42/0x180
 [<ffffffff80282e00>] lock_acquire+0xa0/0xe0
 [<ffffffff8041afa2>] ? xfs_mod_incore_sb_batch+0x42/0x180
 [<ffffffff804dc02c>] ? debug_check_no_obj_freed+0x19c/0x210
 [<ffffffff806d92db>] _spin_lock+0x4b/0xa0
 [<ffffffff8041afa2>] ? xfs_mod_incore_sb_batch+0x42/0x180
 [<ffffffff8027d800>] ? trace_hardirqs_off+0x20/0x40
 [<ffffffff8041afa2>] xfs_mod_incore_sb_batch+0x42/0x180
 [<ffffffff8041e9a0>] xfs_trans_unreserve_and_mod_sb+0x260/0x310
 [<ffffffff80281130>] ? trace_hardirqs_on+0x20/0x40
 [<ffffffff8040ceb7>] ? xfs_log_ticket_put+0x57/0x80
 [<ffffffff8040ceb7>] ? xfs_log_ticket_put+0x57/0x80
 [<ffffffff8040ffd9>] ? xfs_log_done+0x89/0xf0
 [<ffffffff8041f909>] _xfs_trans_commit+0x229/0x450
 [<ffffffff8041cd03>] xfs_rename+0x563/0x7c0
 [<ffffffff802810ad>] ? trace_hardirqs_on_caller+0x18d/0x1f0
 [<ffffffff802e648e>] ? vfs_rename+0x14e/0x410
 [<ffffffff80434dc4>] xfs_vn_rename+0x74/0xa0
 [<ffffffff802e64f8>] vfs_rename+0x1b8/0x410
 [<ffffffff806d9b4f>] ? _spin_unlock+0x3f/0x80
 [<ffffffff802e92db>] sys_renameat+0x23b/0x280
 [<ffffffff806d9ff5>] ? _spin_unlock_irqrestore+0x55/0xb0
 [<ffffffff804d0d10>] ? __up_read+0x90/0xd0
 [<ffffffff8020c4ad>] ? retint_swapgs+0xe/0x13
 [<ffffffff802810ad>] ? trace_hardirqs_on_caller+0x18d/0x1f0
 [<ffffffff802e9349>] sys_rename+0x29/0x50
 [<ffffffff8020ba6b>] system_call_fastpath+0x16/0x1b
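
For context, the message comes from lockdep's static dependency table
filling up: lock-dependency edges are carved out of a fixed-size array
whose size is a compile-time constant in kernel/lockdep_internals.h, and
once it is exhausted lockdep prints the BUG above and disables itself.
Roughly, paraphrased from memory of the 2.6.30-era sources (exact value
and surrounding code may differ between versions):

 /* kernel/lockdep_internals.h -- approximate, version-dependent value */
 #define MAX_LOCKDEP_ENTRIES	8192UL

 /* kernel/lockdep.c -- sketch of the allocation path reached via
  * add_lock_to_list() in the traces above: when the static
  * list_entries[] array runs out, lockdep turns itself off. */
 static struct lock_list *alloc_list_entry(void)
 {
 	if (nr_list_entries >= MAX_LOCKDEP_ENTRIES) {
 		if (!debug_locks_off_graph_unlock())
 			return NULL;
 		printk("BUG: MAX_LOCKDEP_ENTRIES too low!\n");
 		printk("turning off the locking correctness validator.\n");
 		return NULL;
 	}
 	return list_entries + nr_list_entries++;
 }

Bumping the constant (e.g. doubling it) and rebuilding makes the warning
go away locally, but the more interesting question is why these mount and
rename workloads generate so many distinct lock dependencies in the first
place.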
