Message-ID: <20140812160621.GC4752@linux.vnet.ibm.com>
Date: Tue, 12 Aug 2014 09:06:21 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Amit Shah <amit.shah@...hat.com>
Cc: linux-kernel@...r.kernel.org, riel@...hat.com, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, dhowells@...hat.com,
edumazet@...gle.com, dvhart@...ux.intel.com, fweisbec@...il.com,
oleg@...hat.com, sbw@....edu
Subject: Re: [PATCH tip/core/rcu 1/2] rcu: Parallelize and economize NOCB kthread wakeups
On Tue, Aug 12, 2014 at 11:03:21AM +0530, Amit Shah wrote:
> On (Mon) 11 Aug 2014 [20:45:31], Paul E. McKenney wrote:
[ . . . ]
> > > That is a bit surprising. Is it possible that the system is OOMing
> > > quickly due to grace periods not proceeding? If so, maybe giving the
> > > VM more memory would help.
> >
> > Oh, and it is necessary to build the kernel with CONFIG_RCU_TRACE=y
> > for the rcu_nocb_wake trace events to be enabled in the first place.
> > I am assuming that your kernel was built with CONFIG_MAGIC_SYSRQ=y.
>
> Yes, it is :-) I checked that the rcu_nocb_poll cmdline option does
> indeed dump all the ftrace buffers to dmesg.
Good. ;-)
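For reference, the setup being discussed above can be sketched roughly as
follows. This is only an illustrative fragment, not something from the
patch under discussion: the event name rcu_nocb_wake comes from the thread,
while the exact debugfs paths assume a standard ftrace layout.

```shell
# Build-time prerequisite (guest kernel .config), per the thread:
#   CONFIG_RCU_TRACE=y     # makes the rcu_nocb_wake trace events available
#   CONFIG_MAGIC_SYSRQ=y   # allows sysrq-triggered dumps

# Runtime, inside the guest: enable the nocb wakeup trace events.
mount -t debugfs none /sys/kernel/debug 2>/dev/null
echo 1 > /sys/kernel/debug/tracing/events/rcu/rcu_nocb_wake/enable

# Optionally have the kernel dump the ftrace buffers to the console
# on an oops, so they show up in dmesg even if the guest then hangs:
echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
```

The ftrace_dump_on_oops knob can also be set on the guest kernel command
line, which is handy when the failure happens early in boot.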
> > If all of that is in place and no joy, is it possible to extract the
> > ftrace buffer from the running/hung guest? It should be in there
> > somewhere! ;-)
>
> I know of only virtio-console doing this (via userspace only,
> though).
As in userspace within the guest? That would not work. The userspace
that qemu is running in might. There is a way to extract ftrace info
from crash dumps, so one approach would be "sendkey alt-sysrq-c", then
pull the buffer from the resulting dump. For all I know, there might also
be some script that uses the qemu "x" command to get at the ftrace buffer.
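The crash-dump route sketched above could look something like this. This
is a hedged sketch, not a tested recipe: it assumes the guest has kdump or
an equivalent dump mechanism wired up, and the crash utility's trace
extension commands may differ between versions.

```shell
# From the qemu monitor on the host (Ctrl-A c with -nographic, or
# -monitor stdio), inject the sysrq that triggers a crash/dump:
#   (qemu) sendkey alt-sysrq-c

# Once you have the resulting vmcore, open it with the crash utility
# against the matching vmlinux (built with debug info):
crash vmlinux vmcore

# Inside crash, load the trace extension and pull the ftrace buffers;
# the extension's exact subcommands vary by crash version:
#   crash> extend trace.so
#   crash> trace show
```

If a dump mechanism is not available, an alternative is "sendkey
alt-sysrq-t" (or a sysrq wired to ftrace_dump) so the buffers land in the
guest console log instead.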
Again, I cannot reproduce this, and I have been through the code several
times over the past few days, and am not seeing it. I could start
sending you random diagnostic patches, but it would be much better if
we could get the trace data from the failure.
Thanx, Paul