Message-ID: <20101208152231.GD31703@redhat.com>
Date: Wed, 8 Dec 2010 10:22:31 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Don Zickus <dzickus@...hat.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Yinghai Lu <yinghai@...nel.org>, Ingo Molnar <mingo@...e.hu>,
Jason Wessel <jason.wessel@...driver.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Haren Myneni <hbabu@...ibm.com>
Subject: Re: perf hw in kexeced kernel broken in tip
On Wed, Dec 08, 2010 at 04:15:07PM +0100, Peter Zijlstra wrote:
> On Wed, 2010-12-08 at 10:02 -0500, Vivek Goyal wrote:
>
> > >but its kdump so its mostly broken by design anyway ;-)
> >
> > Kdump has its share of problems especially with the fact that
> > kernel/drivers find devices in bad state and are not hardened enough
> > to deal with that. But on bare metal what's the better way of capturing
> > kernel crash dump? Trying to do anything post crash in the kernel is
> > also not very reliable either.
>
> /me <3 RS-232
>
> I haven't found anything better than that...
Serial is good for getting the oops out, but what about the big vmcore?
Secondly, people want the flexibility of sending the vmcore to various
targets, e.g. over the network to a remote server. Booting into the second
kernel opens up all those options: one can do intelligent filtering and
send the vmcore to any kind of destination.
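As a concrete illustration of the filtering and network-target flexibility
mentioned above, here is a sketch of an /etc/kdump.conf along the lines of
the common kdump tooling. The directive names follow the usual kdump.conf
format; the host, key path, and dump level are placeholder values, not
anything from this thread:

```
# /etc/kdump.conf -- illustrative sketch; values are placeholders.

# Filter the vmcore before saving: -d 31 drops zero, cache, private,
# user and free pages; -l compresses the remaining pages with lzo.
core_collector makedumpfile -l -d 31

# Send the filtered vmcore over the network to a remote server
# instead of writing it to a local disk.
ssh kdump@dump-server.example.com
sshkey /root/.ssh/kdump_id_rsa
path /var/crash
```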
>
> And poking at the RS-232 requires less of the kernel to be functional
> than booting into a new kernel (whose image might have been corrupted by
> the dying kernel, etc..)
The problem of the new kernel image getting corrupted can be solved to a
great extent by write-protecting that memory region.
So those who are happy with RS-232 don't have to configure kdump; just
connect a serial console and get the oops message out.
Thanks
Vivek
--