Message-ID: <b647ffbd0807111642v4e8edf87t62c4ee82d86e332b@mail.gmail.com>
Date: Sat, 12 Jul 2008 01:42:51 +0200
From: "Dmitry Adamushko" <dmitry.adamushko@...il.com>
To: "Vegard Nossum" <vegard.nossum@...il.com>
Cc: Yanmin <yanmin_zhang@...ux.intel.com>,
"Rusty Russell" <rusty@...tcorp.com.au>,
"Ingo Molnar" <mingo@...e.hu>,
"Peter Zijlstra" <a.p.zijlstra@...llo.nl>,
"Dhaval Giani" <dhaval@...ux.vnet.ibm.com>,
"Gautham R Shenoy" <ego@...ibm.com>,
"Heiko Carstens" <heiko.carstens@...ibm.com>, miaox@...fujitsu.com,
"Lai Jiangshan" <laijs@...fujitsu.com>,
"Avi Kivity" <avi@...ranet.com>, linux-kernel@...r.kernel.org
Subject: Re: v2.6.26-rc9: kernel BUG at kernel/sched.c:5858!
2008/7/11 Vegard Nossum <vegard.nossum@...il.com>:
> On Fri, Jul 11, 2008 at 1:04 PM, Vegard Nossum <vegard.nossum@...il.com> wrote:
>> On Fri, Jul 11, 2008 at 11:02 AM, Dmitry Adamushko
>> <dmitry.adamushko@...il.com> wrote:
>>> Vegard,
>>>
>>>
>>> regarding the first crash. Would you please run your test with the
>>> following debugging patch and let me know its output?
>>>
>>> The appearance of " * [ pid ] comm (name), orig_cpu() ... " means we
>>> hit a problematic case (with Miao Xie's patch it shouldn't crash).
>>>
>>> I see that you have CONFIG_SCHED_DEBUG=y so I'm also interested in
>>> messages from sched_domain_debug() - "CPU# attaching ...". IOW, all
>>> the kernel messages appearing while a cpu is going down and up.
> [...]
>
>> Ok, now I tested it on my laptop (sorry, no serial console :-)) and I
>
> Now I tested using serial console, but nothing new:
>
> CPU0 attaching NULL sched-domain.
> CPU1 attaching NULL sched-domain.
> CPU0 attaching sched-domain:
> domain 0: span 0-1
> groups: 0 1
> domain 1: span 0-1
> groups: 0-1
> CPU1 attaching sched-domain:
> domain 0: span 0-1
> groups: 1 0
> domain 1: span 0-1
> groups: 0-1
hmm, the sched-domains have been rebuilt too early. The soon-to-be-offline
cpu #1 is still included (presumably because it's still in cpu_online_map
at that point).
> * [ 7 ] comm (ksoftirqd/1), orig_cpu (1), dst_cpu (1), cpu (1)
Have you removed the "__migrate_dead ..." printk messages? This one
should have been printed after __stop_machine_run(take_cpu_down, ...) and
before migrate_live_tasks() takes place... so we would have seen a
"__migrate_dead..." message for ksoftirqd/1 a bit later, I guess.
> CPU 1 is now offline
migrate_live_tasks() should take place here...
> * [ 1228 ] comm (kjournald), orig_cpu (0), dst_cpu (0), cpu (0)
> * [ 3113 ] comm (klogd), orig_cpu (0), dst_cpu (0), cpu (0)
I guess these were migrated onto cpu#0 by migrate_live_tasks(), but
now try_to_wake_up() has been called for them. Because cpu#1 is still
visible in the sched-domains, the load-balancer (select_task_rq())
picks it up erroneously... boom.
>
> Vegard
>
--
Best regards,
Dmitry Adamushko
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/