Message-ID: <a44ae5cd1001131847l582ad373j9a5c944327a06a0a@mail.gmail.com>
Date: Wed, 13 Jan 2010 21:47:11 -0500
From: Miles Lane <miles.lane@...il.com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Américo Wang <xiyou.wangcong@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...e.de>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
Arjan van de Ven <arjan@...radead.org>,
Tejun Heo <tj@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: 2.6.33-rc3 -- INFO: possible recursive locking --
(s_active){++++.+}, at: [<c10d2941>] sysfs_hash_and_remove+0x3d/0x4f

Hmm. I applied your patch to a clean 2.6.33-rc4 tree and used the same
.config file. This time the INFO looks a lot like my original bug
report. Weird. Maybe I am doing something wrong!
[ 291.124830] =============================================
[ 291.124837] [ INFO: possible recursive locking detected ]
[ 291.124846] 2.6.33-rc4 #3
[ 291.124851] ---------------------------------------------
[ 291.124858] pm-suspend/4725 is trying to acquire lock:
[ 291.124865] (s_active){++++.+}, at: [<c10f6115>] sysfs_hash_and_remove+0x3d/0x4f
[ 291.124888]
[ 291.124890] but task is already holding lock:
[ 291.124896] (s_active){++++.+}, at: [<c10f792d>] sysfs_get_active_two+0x16/0x34
[ 291.124914]
[ 291.124917] other info that might help us debug this:
[ 291.124925] 6 locks held by pm-suspend/4725:
[ 291.124930] #0: (&buffer->mutex){+.+.+.}, at: [<c10f68d0>] sysfs_write_file+0x25/0xeb
[ 291.124949] #1: (s_active){++++.+}, at: [<c10f792d>] sysfs_get_active_two+0x16/0x34
[ 291.124968] #2: (s_active/1){.+.+.+}, at: [<c10f7938>] sysfs_get_active_two+0x21/0x34
[ 291.124990] #3: (pm_mutex){+.+.+.}, at: [<c1066f92>] enter_state+0x26/0x114
[ 291.125010] #4: (cpu_add_remove_lock){+.+.+.}, at: [<c10359dc>] cpu_maps_update_begin+0xf/0x11
[ 291.125030] #5: (cpu_hotplug.lock){+.+.+.}, at: [<c1035a0a>] cpu_hotplug_begin+0x1d/0x40
[ 291.125049]
[ 291.125051] stack backtrace:
[ 291.125060] Pid: 4725, comm: pm-suspend Not tainted 2.6.33-rc4 #3
[ 291.125067] Call Trace:
[ 291.125081] [<c12f458f>] ? printk+0xf/0x18
[ 291.125094] [<c105c00d>] __lock_acquire+0x811/0xb67
[ 291.125108] [<c105ae18>] ? mark_held_locks+0x43/0x5b
[ 291.125121] [<c105b1f6>] ? debug_check_no_locks_freed+0x108/0x126
[ 291.125134] [<c105b0b9>] ? trace_hardirqs_on_caller+0x119/0x141
[ 291.125147] [<c10f6115>] ? sysfs_hash_and_remove+0x3d/0x4f
[ 291.125160] [<c105c406>] lock_acquire+0xa3/0xcd
[ 291.125172] [<c10f6115>] ? sysfs_hash_and_remove+0x3d/0x4f
[ 291.125186] [<c10f77c2>] sysfs_addrm_finish+0xa6/0x10a
[ 291.125198] [<c10f6115>] ? sysfs_hash_and_remove+0x3d/0x4f
[ 291.125214] [<c10f6115>] sysfs_hash_and_remove+0x3d/0x4f
[ 291.125227] [<c10f867d>] sysfs_remove_group+0x52/0x81
[ 291.125240] [<c12f2b05>] mc_cpu_callback+0x73/0x9a
[ 291.125253] [<c104fa68>] notifier_call_chain+0x51/0x78
[ 291.125266] [<c104faf4>] __raw_notifier_call_chain+0xe/0x10
[ 291.125278] [<c12e6cad>] _cpu_down+0x7a/0x235
[ 291.125291] [<c1035a85>] disable_nonboot_cpus+0x58/0xe0
[ 291.125305] [<c1066e90>] suspend_devices_and_enter+0xc1/0x19d
[ 291.125318] [<c1067034>] enter_state+0xc8/0x114
[ 291.125330] [<c1066899>] state_store+0x93/0xa7
[ 291.125342] [<c1066806>] ? state_store+0x0/0xa7
[ 291.125355] [<c1165ad5>] kobj_attr_store+0x16/0x22
[ 291.125368] [<c10f696b>] sysfs_write_file+0xc0/0xeb
[ 291.125381] [<c10f68ab>] ? sysfs_write_file+0x0/0xeb
[ 291.125394] [<c10b749e>] vfs_write+0x80/0xdf
[ 291.125407] [<c10b7591>] sys_write+0x3b/0x5d
[ 291.125420] [<c10031e3>] sysenter_do_call+0x12/0x3c
[ 291.228068] CPU 1 is now offline
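
For reference, the trigger here is just a suspend request through sysfs:
pm-suspend ends up writing "mem" to /sys/power/state, which is the
sysfs_write_file -> state_store -> enter_state path in the backtrace, and
disable_nonboot_cpus() then runs the microcode hotplug callback that removes
its sysfs group while the write path still holds an s_active count. A minimal
userspace sketch of that trigger (not a fix), assuming a lockdep-enabled
kernel (CONFIG_PROVE_LOCKING=y) and a machine that can actually enter "mem"
sleep:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Hypothetical reproducer sketch: write "mem" to /sys/power/state the way
 * pm-suspend does.  The suspend path offlines the non-boot CPUs, and the
 * microcode notifier (mc_cpu_callback) removes its sysfs group while the
 * "state" attribute's s_active count is still held by the write, which is
 * what lockdep reports above.
 */
int main(void)
{
	const char buf[] = "mem";
	int fd = open("/sys/power/state", O_WRONLY);

	if (fd < 0) {
		perror("open /sys/power/state");
		return 1;
	}
	if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf))
		perror("write /sys/power/state");
	close(fd);
	return 0;
}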