Message-ID: <20080212152846.GC3078@elte.hu>
Date: Tue, 12 Feb 2008 16:28:46 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Andi Kleen <andi@...stfloor.org>
Cc: linux-kernel@...r.kernel.org, "Frank Ch. Eigler" <fche@...hat.com>,
Roland McGrath <roland@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [git pull] kgdb-light -v10
* Andi Kleen <andi@...stfloor.org> wrote:
> > do spinning for now: we dont _ever_ want to break a correctly
> > working system with kgdb.
>
> Stopping all CPUs for indefinite time very much seems like "breaking a
> correctly working system" to me. [...]
well, this is a small detail, but you are still wrong: on a correctly
working system this will not occur. (If it does, tell me how.)
KGDB does a very straightforward "all CPUs enter controlled state"
transition when the session begins, and at the end an "all CPUs
continue" transition.
I'm not sure what exactly you mean by "stopping all CPUs for an
indefinite amount of time" (your statement is sufficiently vague to be
hard to counter with specifics) - that does not happen, unless the
system is so buggy that a CPU is no longer able to process an NMI
[which is rather rare] - and in that case the whole system is likely
locked up anyway.
In that case the simplest and safest behavior is what kgdb-light does
currently: it proceeds only once all CPUs have responded. Note that you
are wrong to suggest that "KGDB locks up" - the system _has already
locked up_.
Yes, we could "time out" and force a KGDB session even if some CPUs do
not respond. But that is obviously not a completely safe system state,
because the unresponsive CPUs might be changing things under the feet
of the debugger. So the safest first-level approach is not to enter the
debugger in this case.
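
For comparison, a timed-out entry would look roughly like this (again
purely an illustrative sketch, reusing the made-up helpers from the
fragment above) - the final comment marks the unsafe window you buy
yourself with the timeout:

	/* hypothetical variant: give up waiting after a timeout */
	static int debug_enter_with_timeout(unsigned long timeout_loops)
	{
		unsigned long i;

		atomic_set(&debugger_active, 1);
		send_debug_nmi_to_others();

		for (i = 0; i < timeout_loops; i++) {
			if (atomic_read(&cpus_in_debugger) ==
					num_online_cpus() - 1)
				return 0;	/* all CPUs controlled */
			cpu_relax();
		}

		/*
		 * Timed out: one or more CPUs never took the NMI. If we
		 * enter the debugger now, those CPUs keep running and can
		 * modify memory, locks etc. under the feet of the debugger.
		 */
		return -1;
	}
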
Ingo