Message-ID: <20160701042758.GB32617@sejong>
Date: Fri, 1 Jul 2016 13:27:58 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
CC: <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Minchan Kim <minchan@...nel.org>
Subject: Re: [QUESTION] Is there a better way to get ftrace dump on guest?
On Wed, Jun 29, 2016 at 09:52:31AM +0900, Namhyung Kim wrote:
> Hi Steve,
>
> On Tue, Jun 28, 2016 at 09:57:27AM -0400, Steven Rostedt wrote:
> > On Tue, 28 Jun 2016 15:33:18 +0900
> > Namhyung Kim <namhyung@...nel.org> wrote:
> >
> > > Send again to correct addresses, sorry!
> > >
> > > On Tue, Jun 28, 2016 at 3:25 PM, Namhyung Kim <namhyung@...nel.org> wrote:
> > > > Hello,
> > > >
> > > > I'm running some guest machines for kernel development. For
> > > > debugging purposes, I use lots of trace_printk() since it's faster
> > > > than normal printk(). When a kernel crash happens, the trace buffer
> > > > is printed on the console (I set ftrace_dump_on_oops), but it takes
> > > > too much time. I don't want to reduce the size of the ring buffer,
> > > > as I want to collect as much debug info as possible. And I also
> > > > want to see the trace from all CPUs, so 'ftrace_dump_on_oops = 2'
> > > > is not an option.
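> > > >
> > > > (Concretely, the knob I mean is the sysctl; a value of 2 restricts
> > > > the dump to the CPU that hit the oops:)
> > > >
> > > >   # dump the ring buffers of all CPUs on oops:
> > > >   echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
> > > >   # dump only the buffer of the oops'ing CPU:
> > > >   echo 2 > /proc/sys/kernel/ftrace_dump_on_oops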
> > > >
> > > > I know that kexec/kdump (and the crash tool) can dump and analyze
> > > > the trace buffer later. But it's cumbersome to do it every time
> > > > and, more importantly, I don't want to spend the memory on a
> > > > crashkernel.
> > > >
> > > > So what is the best way to handle this? I'd like to know how
> > > > others set up their debugging environments...
> >
> > Heh, I'd say something helpful but you basically already shot down all
> > of my advice, because what I do is...
> >
> > 1) Reduce the size of the ring buffer
> >
> > 2) Dump out just one CPU
> >
> > 3) Use kexec/kdump and make a crash kernel to extract trace.dat from
> >
> >
> > That's my debugging environment, but it looks like you want something
> > else.
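> >
> > Roughly, the setup looks like this (the paths and the reservation
> > size are just examples):
> >
> >   # reserve memory for the capture kernel at boot, e.g. with
> >   # crashkernel=256M on the kernel command line, then load it:
> >   kexec -p /boot/vmlinuz-$(uname -r) \
> >       --initrd=/boot/initrd.img-$(uname -r) \
> >       --append="root=/dev/sda1 single irqpoll"
> >   # on a crash the capture kernel boots; save the dump from it:
> >   cp /proc/vmcore /var/crash/vmcore
> >   # then open the dump with crash(8) plus its ftrace extension
> >   # (trace.so) to pull the ring buffer out as a trace.dat for
> >   # trace-cmd:
> >   crash /usr/lib/debug/vmlinux /var/crash/vmcore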
>
> Thanks for sharing. Yeah, I'd like to know other ways to overcome
> this if possible. Since I don't have enough knowledge about this
> area, I hope others have better ideas. :)
Now I'm thinking about extending the pstore subsystem. AFAICS it's
the best fit for my use case. While it currently only supports the
function tracer via a dedicated ftrace_ops, IMHO it could also be
used for the ftrace dump. Would it make sense to add a virtio
pstore driver that saves the dump to files on the host?
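
For context, the function-tracer support pstore has today works
roughly like this (shown with the ramoops backend as the store; the
record name is backend-dependent):

  # enable persistent function tracing:
  mount -t pstore pstore /sys/fs/pstore
  echo 1 > /sys/kernel/debug/pstore/record_ftrace
  # after a crash and reboot, the record shows up as a pstore
  # file (e.g. ftrace-ramoops with the ramoops backend):
  ls /sys/fs/pstore

A virtio backend could expose the same records as files on the host
instead, which is what I have in mind for the ftrace dump.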
Thanks,
Namhyung