Message-ID: <20070616103707.GA28096@elte.hu>
Date: Sat, 16 Jun 2007 12:37:07 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Miklos Szeredi <miklos@...redi.hu>
Cc: chris@...ee.ca, linux-kernel@...r.kernel.org, tglx@...utronix.de,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [BUG] long freezes on thinkpad t60

* Miklos Szeredi <miklos@...redi.hu> wrote:
> I've got some more info about this bug. It is gathered with
> nmi_watchdog=2 and a modified nmi_watchdog_tick(), which instead of
> calling die_nmi() just prints a line and calls show_registers().

great!

> The pattern that emerges is that on CPU0 we have an interrupt, which
> is trying to acquire the rq lock, but can't.
>
> On CPU1 we have strace which is doing wait_task_inactive(), which sort
> of spins acquiring and releasing the rq lock. I've checked some of
> the traces and it is just before acquiring the rq lock, or just after
> releasing it, but is not actually holding it.
>
> So is it possible that wait_task_inactive() could be starving the
> other waiters of the rq spinlock? Any ideas?

hm, this is really interesting, and indeed a smoking gun. The T60 has a
Core2Duo, and i've _never_ seen MESI starvation happen on dual-core,
single-socket CPUs! (The only serious MESI starvation i know about is
on multi-socket Opterons: there the trylock loop of spinlock debugging
is known to starve some CPUs out of the locks being polled, so we had
to turn off that aspect of spinlock debugging.)

wait_task_inactive(), although it busy-loops, is pretty robust: it does
a proper spin-lock/spin-unlock sequence and has a cpu_relax() in
between. Furthermore, the rep_nop() that cpu_relax() is based on is
unconditional, so it's not like we could somehow end up not having the
REP; NOP sequence there (which should make the lock polling even more
fair).
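
For reference, here is a simplified sketch of the polling loop in
question (paraphrased from kernel/sched.c of this era; details of
task_rq_lock() and the runqueue structures are elided):

	void wait_task_inactive(struct task_struct *p)
	{
		unsigned long flags;
		struct rq *rq;
		int preempted;

	repeat:
		rq = task_rq_lock(p, &flags);	/* take the rq lock */
		if (unlikely(p->array || task_running(rq, p))) {
			preempted = !task_running(rq, p);
			task_rq_unlock(rq, &flags);	/* drop it again */
			cpu_relax();			/* REP; NOP */
			if (preempted)
				yield();
			goto repeat;
		}
		task_rq_unlock(rq, &flags);
	}

Note how the lock is taken and dropped on every iteration - which
matches the traces above, where CPU1 was always caught just before the
spin_lock() or just after the spin_unlock(), never holding the lock.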

could you try the quick hack below, on top of cfs-v17? It adds two
things to wait_task_inactive():

 - a cond_resched() [in case you are running !PREEMPT]

 - use of MONITOR+MWAIT to monitor memory transactions to the rq->curr
   cacheline. This should make the polling loop definitely fair.

If this solves the problem on your box then i'll do a proper fix and
introduce a cpu_relax_memory_change(*addr) type of API around
monitor/mwait. This patch boots fine on my T60 - but i never saw your
problem.
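
Something like the following, perhaps (hypothetical sketch only - the
name is from above, but the parameters and the x86 details are just
one possible shape of such an API):

	/*
	 * Like cpu_relax(), but use MONITOR+MWAIT to (try to) sleep
	 * until the cacheline containing *addr is written to. 'val'
	 * is the last observed value: if it has changed already, do
	 * not enter MWAIT.
	 */
	static inline void
	cpu_relax_memory_change(void *addr, unsigned long val)
	{
	#ifdef CONFIG_X86
		__monitor(addr, 0, 0);
		smp_mb();
		if (*(volatile unsigned long *)addr == val)
			__mwait(0, 0);
	#else
		cpu_relax();
	#endif
	}

The patch below open-codes this same pattern directly in
wait_task_inactive().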

[ btw., utrace IIRC fixes ptrace to get rid of wait_task_inactive(). ]

	Ingo

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -834,6 +834,16 @@ repeat:
 		cpu_relax();
 		if (preempted)
 			yield();
+		else
+			cond_resched();
+		/*
+		 * Wait for "curr" to change:
+		 */
+		__monitor((void *)&rq->curr, 0, 0);
+		smp_mb();
+		if (rq->curr != p)
+			__mwait(0, 0);
+
 		goto repeat;
 	}
 	task_rq_unlock(rq, &flags);