Message-ID: <45072E7A.2080701@gmail.com>
Date: Wed, 13 Sep 2006 00:02:11 +0159
From: Jiri Slaby <jirislaby@...il.com>
To: Michal Piotrowski <michal.k.k.piotrowski@...il.com>
CC: Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
"Rafael J. Wysocki" <rjw@...k.pl>, Pavel Machek <pavel@....cz>,
Dave Jones <davej@...hat.com>
Subject: Re: 2.6.18-rc6-mm2
Michal Piotrowski wrote:
> On 12/09/06, Andrew Morton <akpm@...l.org> wrote:
>>
>> ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.18-rc6/2.6.18-rc6-mm2/
>>
>>
>
> My FC6T2 bug appeared in -mm
> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=202223
>
> Any ideas about this?

Hi,

this is a known issue:
http://lkml.org/lkml/2006/9/8/56
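
In short, as I read the trace below: workqueue_cpu_callback() takes workqueue_mutex
on CPU_DOWN_PREPARE while _cpu_down() is holding cpu_chain's rwsem, and keeps the
mutex held until the CPU_DEAD notification, where _cpu_down() takes the rwsem
again -- so lockdep sees both lock orders and reports the circle. A minimal
userspace sketch of that pattern (illustrative only, not kernel code; the names
just mirror the locks in the trace):

/*
 * Illustrative only -- a userspace sketch of the AB/BA ordering lockdep
 * complains about.  One path takes the notifier rwsem and then the
 * workqueue mutex; the other path already holds the mutex when the
 * rwsem is taken again.  Build with: gcc -pthread.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t chain_rwsem;                          /* ~ (cpu_chain).rwsem */
static pthread_mutex_t  wq_mutex = PTHREAD_MUTEX_INITIALIZER; /* ~ workqueue_mutex   */

/* Path #1 from the trace: rwsem held, callback takes the mutex and keeps it. */
static void down_prepare_callback(void)
{
	pthread_rwlock_rdlock(&chain_rwsem);	/* blocking_notifier_call_chain() */
	pthread_mutex_lock(&wq_mutex);		/* workqueue_cpu_callback(CPU_DOWN_PREPARE) */
	/* ... mutex stays held until the CPU_DEAD notification ... */
	pthread_rwlock_unlock(&chain_rwsem);
}

/* Path #0 from the trace: mutex still held, notifier chain entered again. */
static void cpu_dead_notification(void)
{
	pthread_rwlock_rdlock(&chain_rwsem);	/* blocking_notifier_call_chain() again */
	/* ... CPU_DEAD callbacks run, the mutex is finally dropped here ... */
	pthread_mutex_unlock(&wq_mutex);
	pthread_rwlock_unlock(&chain_rwsem);
}

int main(void)
{
	pthread_rwlock_init(&chain_rwsem, NULL);
	down_prepare_callback();
	cpu_dead_notification();
	printf("lock orders seen: rwsem->mutex and mutex->rwsem\n");
	return 0;
}
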
> echo shutdown > /sys/power/disk; echo disk > /sys/power/state
>
> CPU 1 is now offline
> lockdep: not fixing up alternatives.
>
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.18-rc6-mm2 #120
> -------------------------------------------------------
> bash/1961 is trying to acquire lock:
> ((cpu_chain).rwsem){----}, at: [<c012e289>] blocking_notifier_call_chain+0x11/0x2d
>
> but task is already holding lock:
> (workqueue_mutex){--..}, at: [<c02f8156>] mutex_lock+0x1c/0x1f
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
> -> #1 (workqueue_mutex){--..}:
> [<c0138b97>] add_lock_to_list+0x5c/0x7a
> [<c013aca0>] __lock_acquire+0x9ec/0xae8
> [<c013b0fc>] lock_acquire+0x6b/0x88
> [<c02f7f1b>] __mutex_lock_slowpath+0xd6/0x2f5
> [<c02f8156>] mutex_lock+0x1c/0x1f
> [<c01312de>] workqueue_cpu_callback+0x109/0x1ff
> [<c012de43>] notifier_call_chain+0x20/0x31
> [<c012e295>] blocking_notifier_call_chain+0x1d/0x2d
> [<c013f867>] _cpu_down+0x48/0x207
> [<c013fbf8>] disable_nonboot_cpus+0x98/0x12c
> [<c01451d6>] prepare_processes+0xe/0x41
> [<c014539e>] pm_suspend_disk+0x9/0xe2
> [<c0144635>] enter_state+0x53/0x177
> [<c01447df>] state_store+0x86/0x9c
> [<c01abde4>] subsys_attr_store+0x20/0x25
> [<c01abee3>] sysfs_write_file+0xa6/0xcc
> [<c0174d60>] vfs_write+0xcd/0x174
> [<c01753fa>] sys_write+0x3b/0x71
> [<c0103156>] sysenter_past_esp+0x5f/0x99
> [<ffffffff>] 0xffffffff
> -> #0 ((cpu_chain).rwsem){----}:
> [<c013a280>] print_circular_bug_tail+0x2e/0x62
> [<c013abd7>] __lock_acquire+0x923/0xae8
> [<c013b0fc>] lock_acquire+0x6b/0x88
> [<c0136e60>] down_read+0x28/0x3b
> [<c012e289>] blocking_notifier_call_chain+0x11/0x2d
> [<c013f98e>] _cpu_down+0x16f/0x207
> [<c013fbf8>] disable_nonboot_cpus+0x98/0x12c
> [<c01451d6>] prepare_processes+0xe/0x41
> [<c014539e>] pm_suspend_disk+0x9/0xe2
> [<c0144635>] enter_state+0x53/0x177
> [<c01447df>] state_store+0x86/0x9c
> [<c01abde4>] subsys_attr_store+0x20/0x25
> [<c01abee3>] sysfs_write_file+0xa6/0xcc
> [<c0174d60>] vfs_write+0xcd/0x174
> [<c01753fa>] sys_write+0x3b/0x71
> [<c0103156>] sysenter_past_esp+0x5f/0x99
> [<ffffffff>] 0xffffffff
>
> other info that might help us debug this:
>
> 2 locks held by bash/1961:
> #0: (cpu_add_remove_lock){--..}, at: [<c02f8156>] mutex_lock+0x1c/0x1f
> #1: (workqueue_mutex){--..}, at: [<c02f8156>] mutex_lock+0x1c/0x1f
>
> stack backtrace:
> [<c01041ba>] dump_trace+0x63/0x1ca
> [<c0104333>] show_trace_log_lvl+0x12/0x25
> [<c0104993>] show_trace+0xd/0x10
> [<c0104a58>] dump_stack+0x16/0x18
> [<c013a2a9>] print_circular_bug_tail+0x57/0x62
> [<c013abd7>] __lock_acquire+0x923/0xae8
> [<c013b0fc>] lock_acquire+0x6b/0x88
> [<c0136e60>] down_read+0x28/0x3b
> [<c012e289>] blocking_notifier_call_chain+0x11/0x2d
> [<c013f98e>] _cpu_down+0x16f/0x207
> [<c013fbf8>] disable_nonboot_cpus+0x98/0x12c
> [<c01451d6>] prepare_processes+0xe/0x41
> [<c014539e>] pm_suspend_disk+0x9/0xe2
> [<c0144635>] enter_state+0x53/0x177
> [<c01447df>] state_store+0x86/0x9c
> [<c01abde4>] subsys_attr_store+0x20/0x25
> [<c01abee3>] sysfs_write_file+0xa6/0xcc
> [<c0174d60>] vfs_write+0xcd/0x174
> [<c01753fa>] sys_write+0x3b/0x71
> [<c0103156>] sysenter_past_esp+0x5f/0x99
> DWARF2 unwinder stuck at sysenter_past_esp+0x5f/0x99
>
> Leftover inexact backtrace:
>
> l *blocking_notifier_call_chain+0x11/0x2d
> 0xc012e278 is in blocking_notifier_call_chain (/usr/src/linux-mm/kernel/sys.c:324).
> 319 * of the last notifier function called.
> 320 */
> 321
> 322 int blocking_notifier_call_chain(struct blocking_notifier_head *nh,
> 323 unsigned long val, void *v)
> 324 {
> 325 int ret;
> 326
> 327 down_read(&nh->rwsem);
> 328 ret = notifier_call_chain(&nh->head, val, v);
>
> l *mutex_lock+0x1c/0x1f
> 0xc02f813a is in mutex_lock (/usr/src/linux-mm/kernel/mutex.c:85).
> 80 * deadlock debugging. )
> 81 *
> 82 * This function is similar to (but not equivalent to) down().
> 83 */
> 84 void inline fastcall __sched mutex_lock(struct mutex *lock)
> 85 {
> 86 might_sleep();
> 87 /*
> 88 * The locking fastpath is the 1->0 transition from
> 89 * 'unlocked' into 'locked' state.
>
> http://www.stardust.webpages.pl/files/mm/2.6.18-rc6-mm2/mm-dmesg2
> http://www.stardust.webpages.pl/files/mm/2.6.18-rc6-mm2/mm-config1
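
For completeness, the trigger quoted at the top of the report is just two sysfs
writes; the same thing as a tiny C program, if that is easier to script (paths as
in the report, needs root, the write_sysfs() helper is made up here):

/* Illustrative only: equivalent of
 *   echo shutdown > /sys/power/disk; echo disk > /sys/power/state
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_sysfs(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return;
	}
	if (write(fd, val, strlen(val)) < 0)
		perror(path);
	close(fd);
}

int main(void)
{
	write_sysfs("/sys/power/disk", "shutdown");	/* pick plain shutdown mode */
	write_sysfs("/sys/power/state", "disk");	/* start suspend-to-disk */
	return 0;
}
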
regards,
--
http://www.fi.muni.cz/~xslaby/ Jiri Slaby
faculty of informatics, masaryk university, brno, cz
e-mail: jirislaby gmail com, gpg pubkey fingerprint:
B674 9967 0407 CE62 ACC8 22A0 32CC 55C3 39D4 7A7E