Message-ID: <20070718071852.GB1516@elte.hu>
Date: Wed, 18 Jul 2007 09:18:52 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Fernando Lopez-Lezcano <nando@...ma.Stanford.EDU>
Cc: Gabriel C <nix.or.die@...glemail.com>,
Carsten Emde <carsten.emde@...dl.org>,
"jcaceres@...ma.Stanford.EDU" <jcaceres@...ma.Stanford.EDU>,
Steven Rostedt <rostedt@...dmis.org>,
RT-Users <linux-rt-users@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Rui Nuno Capela <rncbc@...bc.org>,
"Paul E. McKenney" <paulmck@...ibm.com>
Subject: Re: v2.6.21.5-rt19 (sched_getaffinity?)
* Fernando Lopez-Lezcano <nando@...ma.Stanford.EDU> wrote:
> > does lockdep pinpoint anything?
>
> Lots of stuff, and at the end the lock report for the problem.
> Hopefully some of this will help... I have attached the whole bootup
> sequence as logged in /var/log/messages.
yeah, it pinpointed the bug. It seems to be an interaction between
RCU-preempt (Paul Cc:-ed) and sched_mc_power_savings_store():
detach_destroy_domains() calls synchronize_sched(), which in
RCU-preempt is implemented via sched_getaffinity(), which takes
sched_hotcpu_mutex - but arch_reinit_sched_domains() already holds
sched_hotcpu_mutex at that point, so the task deadlocks on itself.
See the lockdep report below.
I've added a quick workaround below as well, which should keep your box
from hanging.
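To make the recursion concrete, here is a minimal userspace sketch
(plain pthreads, not kernel code - the names only mirror the kernel
ones) of the pattern lockdep is complaining about:

/*
 * Userspace sketch of the recursive locking pattern reported below:
 * a task that already holds a non-recursive mutex (standing in for
 * sched_hotcpu_mutex, taken by arch_reinit_sched_domains()) calls a
 * helper that takes the same mutex again (as synchronize_sched() ->
 * sched_getaffinity() does), and deadlocks on itself.
 * Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

/* stands in for sched_hotcpu_mutex; default pthread mutexes on Linux
 * are non-recursive, like kernel mutexes */
static pthread_mutex_t hotcpu_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stands in for sched_getaffinity(): takes hotcpu_mutex internally */
static void getaffinity_stub(void)
{
	pthread_mutex_lock(&hotcpu_mutex);	/* second acquisition: hangs */
	pthread_mutex_unlock(&hotcpu_mutex);
}

int main(void)
{
	/* the arch_reinit_sched_domains() path: first acquisition */
	pthread_mutex_lock(&hotcpu_mutex);
	printf("holding the mutex, now re-acquiring...\n");
	getaffinity_stub();			/* never returns */
	pthread_mutex_unlock(&hotcpu_mutex);
	return 0;
}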
Ingo
=============================================
[ INFO: possible recursive locking detected ]
[ 2.6.22-0182.rt4.3.fc7.ccrmart #1
---------------------------------------------
sched-powersave/3251 is trying to acquire lock:
(sched_hotcpu_mutex){--..}, at: [<c0424a37>] sched_getaffinity+0x14/0x94
but task is already holding lock:
(sched_hotcpu_mutex){--..}, at: [<c04245a5>] arch_reinit_sched_domains+0xe/0x33
other info that might help us debug this:
1 lock held by sched-powersave/3251:
#0: (sched_hotcpu_mutex){--..}, at: [<c04245a5>] arch_reinit_sched_domains+0xe/0x33
stack backtrace:
[<c040600c>] show_trace_log_lvl+0x1a/0x2f
[<c0406ae8>] show_trace+0x12/0x14
[<c0406b50>] dump_stack+0x16/0x18
[<c0446f46>] __lock_acquire+0x172/0xb67
[<c0447d03>] lock_acquire+0x56/0x6f
[<c061d414>] _mutex_lock+0x2b/0x38
[<c0424a37>] sched_getaffinity+0x14/0x94
[<c0460841>] __synchronize_sched+0x11/0x5f
[<c0423fa8>] detach_destroy_domains+0x2c/0x30
[<c04245af>] arch_reinit_sched_domains+0x18/0x33
[<c0424606>] sched_power_savings_store+0x3c/0x49
[<c0424634>] sched_mc_power_savings_store+0xe/0x10
[<c0561f11>] sysdev_class_store+0x20/0x25
[<c04bbc6c>] sysfs_write_file+0xaf/0xd0
[<c048183c>] vfs_write+0xaf/0x163
[<c0481e8a>] sys_write+0x3d/0x61
[<c040501a>] syscall_call+0x7/0xb
=======================
thinkpad_acpi: ThinkPad ACPI Extras v0.14
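For reference, the deadlock is triggered from userspace by writing to
the sched_mc_power_savings sysfs attribute (the sys_write ->
sysfs_write_file -> sched_mc_power_savings_store path in the trace
above). A minimal trigger, assuming the attribute lives at its usual
/sys/devices/system/cpu location on this kernel:

/*
 * Minimal userspace trigger for the path in the backtrace above:
 * sys_write -> sysfs_write_file -> sysdev_class_store ->
 * sched_mc_power_savings_store. The sysfs path is an assumption;
 * adjust it if the attribute lives elsewhere on your kernel.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/devices/system/cpu/sched_mc_power_savings",
		      O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* any store re-runs arch_reinit_sched_domains() */
	if (write(fd, "1", 1) < 0)
		perror("write");
	close(fd);
	return 0;
}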
--------------------->
Index: linux-rt.q/kernel/sched.c
===================================================================
--- linux-rt.q.orig/kernel/sched.c
+++ linux-rt.q/kernel/sched.c
@@ -6699,7 +6699,6 @@ static void detach_destroy_domains(const
 
 	for_each_cpu_mask(i, *cpu_map)
 		cpu_attach_domain(NULL, i);
-	synchronize_sched();
 	arch_destroy_sched_domains(cpu_map);
 }
 
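(The workaround only removes the synchronize_sched() call, i.e. it
avoids the recursive sched_hotcpu_mutex acquisition by skipping the
grace-period wait in detach_destroy_domains() - a stopgap until the
RCU-preempt interaction is fixed properly.)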