Message-ID: <Pine.LNX.4.64.0805091231430.18088@blonde.site>
Date: Fri, 9 May 2008 12:36:00 +0100 (BST)
From: Hugh Dickins <hugh@...itas.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: Theodore Tso <tytso@....EDU>, Glauber Costa <gcosta@...hat.com>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Venki Pallipadi <venkatesh.pallipadi@...el.com>
Subject: Re: Possible regression? 2.6.26-rc1: T61s failure after suspend/resume
On Fri, 9 May 2008, Peter Zijlstra wrote:
> On Thu, 2008-05-08 at 22:59 -0400, Theodore Tso wrote:
>
> > [ 2.097919] =================================
> > [ 2.098059] [ INFO: inconsistent lock state ]
> > [ 2.098136] 2.6.25-numa-04462-g10c993a #23
> > [ 2.098209] ---------------------------------
> > [ 2.098284] inconsistent {in-hardirq-W} -> {hardirq-on-W} usage.
> > [ 2.098359] swapper/0 [HC0[0]:SC0[0]:HE1:SE1] takes:
> > [ 2.098434] (&rq->rq_lock_key){++..}, at: [sched_clock_idle_wakeup_event+67/116] sched_clock_idle_wakeup_event+0x43/0x74
> > [ 2.098753] {in-hardirq-W} state was registered at:
> > [ 2.098833] [__lock_acquire+1023/2834] __lock_acquire+0x3ff/0xb12
> > [ 2.099082] [lock_acquire+106/144] lock_acquire+0x6a/0x90
> > [ 2.099329] [_spin_lock+28/73] _spin_lock+0x1c/0x49
> > [ 2.099590] [scheduler_tick+67/443] scheduler_tick+0x43/0x1bb
> > [ 2.099836] [update_process_times+61/73] update_process_times+0x3d/0x49
> > [ 2.100083] [tick_periodic+102/114] tick_periodic+0x66/0x72
> > [ 2.100327] [tick_handle_periodic+25/106] tick_handle_periodic+0x19/0x6a
> > [ 2.100574] [timer_interrupt+72/115] timer_interrupt+0x48/0x73
> > [ 2.100822] [handle_IRQ_event+26/79] handle_IRQ_event+0x1a/0x4f
> > [ 2.101064] [handle_level_irq+127/202] handle_level_irq+0x7f/0xca
> > [ 2.101316] [do_IRQ+169/210] do_IRQ+0xa9/0xd2
> > [ 2.101563] [<ffffffff>] 0xffffffff
> > [ 2.101806] irq event stamp: 1772935
> > [ 2.101883] hardirqs last enabled at (1772935): [native_sched_clock+231/255] native_sched_clock+0xe7/0xff
> > [ 2.102091] hardirqs last disabled at (1772934): [native_sched_clock+109/255] native_sched_clock+0x6d/0xff
> > [ 2.102298] softirqs last enabled at (1772496): [__do_softirq+249/255] __do_softirq+0xf9/0xff
> > [ 2.102501] softirqs last disabled at (1772491): [do_softirq+113/206] do_softirq+0x71/0xce
> > [ 2.102708]
> > [ 2.102709] other info that might help us debug this:
> > [ 2.102850] no locks held by swapper/0.
> > [ 2.102923]
> > [ 2.102924] stack backtrace:
> > [ 2.103067] Pid: 0, comm: swapper Not tainted 2.6.25-numa-04462-g10c993a #23
> > [ 2.103145] [print_usage_bug+263/276] print_usage_bug+0x107/0x114
> > [ 2.103278] [mark_lock+491/924] mark_lock+0x1eb/0x39c
> > [ 2.103417] [__lock_acquire+1140/2834] __lock_acquire+0x474/0xb12
> > [ 2.103547] [restore_nocheck+18/21] ? restore_nocheck+0x12/0x15
> > [ 2.103740] [native_sched_clock+231/255] ? native_sched_clock+0xe7/0xff
> > [ 2.103929] [lock_acquire+106/144] lock_acquire+0x6a/0x90
> > [ 2.104065] [sched_clock_idle_wakeup_event+67/116] ? sched_clock_idle_wakeup_event+0x43/0x74
> > [ 2.104255] [_spin_lock+28/73] _spin_lock+0x1c/0x49
> > [ 2.104392] [sched_clock_idle_wakeup_event+67/116] ? sched_clock_idle_wakeup_event+0x43/0x74
> > [ 2.104582] [sched_clock_idle_wakeup_event+67/116] sched_clock_idle_wakeup_event+0x43/0x74
> > [ 2.104715] [<f886c3b3>] acpi_idle_enter_simple+0x19a/0x21b [processor]
> > [ 2.104858] [<f886bfa5>] acpi_idle_enter_bm+0xbe/0x332 [processor]
> > [ 2.104999] [cpuidle_idle_call+99/143] cpuidle_idle_call+0x63/0x8f
> > [ 2.105135] [cpuidle_idle_call+0/143] ? cpuidle_idle_call+0x0/0x8f
> > [ 2.105330] [cpu_idle+182/214] cpu_idle+0xb6/0xd6
> > [ 2.105463] [rest_init+73/75] rest_init+0x49/0x4b
> > [ 2.105596] =======================
>
> That's not good...
>
> But I'm failing to see how IRQs get enabled between
> sched_clock_idle_sleep_event() and sched_clock_idle_wakeup_event() in
> acpi_idle_enter_simple().
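(For reference, my recollection of the 2.6.25 code there, simplified
and quite possibly wrong in detail, is that both events sit inside the
irq-disabled region:

	local_irq_disable();
	...
	t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
	/* Tell the scheduler that we are going deep-idle: */
	sched_clock_idle_sleep_event();
	acpi_idle_do_entry(cx);		/* hlt or monitor/mwait */
	t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
	...
	/* Tell the scheduler how much we idled: */
	sched_clock_idle_wakeup_event(sleep_ticks * PM_TIMER_TICK_NS);
	local_irq_enable();

so on paper irqs stay off across both events, and lockdep can only see
hardirqs-on at the wakeup_event if the idle entry itself turned them
back on without lockdep hearing about it.)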
If this point in Ted's bisection was before your idle irq fixes went
in (the patch that added trace_hardirqs_on to __sti_mwait, amongst
other things), would that account for it?
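
The pattern I have in mind, paraphrased from memory (so treat the
exact shape as a guess rather than a quote of your patch), is the
"sti; mwait" one, where the instruction pair re-enables interrupts
without going through local_irq_enable(), so lockdep has to be told
explicitly:

	static void mwait_idle(void)
	{
		if (!need_resched()) {
			__monitor((void *)&current_thread_info()->flags, 0, 0);
			smp_mb();
			if (!need_resched()) {
				/*
				 * "sti; mwait" turns irqs back on behind
				 * lockdep's back: annotate first, or
				 * lockdep's hardirq state goes stale.
				 */
				trace_hardirqs_on();
				__sti_mwait(0, 0);
			} else
				local_irq_enable();
		} else
			local_irq_enable();
	}

Without such an annotation, lockdep's idea of the hardirq state after
the mwait path is whatever it last recorded, which could leave it and
the real irq flags disagreeing by the time
sched_clock_idle_wakeup_event() takes the rq lock.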
Hugh