Date:	Tue, 12 Aug 2014 09:06:21 -0700
From:	"Paul E. McKenney" <>
To:	Amit Shah <>
Subject: Re: [PATCH tip/core/rcu 1/2] rcu: Parallelize and economize NOCB
 kthread wakeups

On Tue, Aug 12, 2014 at 11:03:21AM +0530, Amit Shah wrote:
> On (Mon) 11 Aug 2014 [20:45:31], Paul E. McKenney wrote:

[ . . . ]

> > > That is a bit surprising.  Is it possible that the system is OOMing
> > > quickly due to grace periods not proceeding?  If so, maybe giving the
> > > VM more memory would help.
> > 
> > Oh, and it is necessary to build the kernel with CONFIG_RCU_TRACE=y
> > for the rcu_nocb_wake trace events to be enabled in the first place.
> > I am assuming that your kernel was built with CONFIG_MAGIC_SYSRQ=y.
> Yes, it is :-)  I checked that the rcu_nocb_poll cmdline option does
> indeed dump all the ftrace buffers to dmesg.

Good.  ;-)
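[As an aside, a quick way to confirm both of the options discussed above
in a built kernel tree is to grep the .config.  The helper below is just
a sketch; the /boot/config path in the example call is an assumption
about where the guest's config lives.]

```shell
# check_opts: report whether each given CONFIG_ option is =y in a
# kernel config file (first argument), one line of output per option.
check_opts() {
	cfg="$1"; shift
	for opt in "$@"; do
		if grep -q "^${opt}=y" "$cfg" 2>/dev/null; then
			echo "${opt}=y"
		else
			echo "${opt} not set"
		fi
	done
}

# Example (path is an assumption; point it at the guest kernel's .config):
check_opts "/boot/config-$(uname -r)" CONFIG_RCU_TRACE CONFIG_MAGIC_SYSRQ
```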

> > If all of that is in place and no joy, is it possible to extract the
> > ftrace buffer from the running/hung guest?  It should be in there
> > somewhere!  ;-)
> I know of only virtio-console doing this (via userspace only,
> though).

As in userspace within the guest?  That would not work.  The userspace
that qemu itself is running in might, though.  There is a way to extract
ftrace info
from crash dumps, so one approach would be "sendkey alt-sysrq-c", then
pull the buffer from the resulting dump.  For all I know, there might also
be some script that uses the qemu "x" command to get at the ftrace buffer.
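[One possible sequence for the crash-dump route described above, shown
as a sketch: the qemu monitor commands are real, but the vmlinux/dump
paths and the crash utility's trace.so extension being installed are
assumptions about the host setup.]

```
# In the qemu monitor on the host:
(qemu) sendkey alt-sysrq-c              # panic the guest so a dump is taken
# or, without relying on kdump inside the guest:
(qemu) dump-guest-memory /tmp/guest.dump

# Then pull the ftrace ring buffer out of the dump with the crash
# utility and its trace extension (paths are examples):
$ crash vmlinux /tmp/guest.dump
crash> extend trace.so
crash> trace dump -t /tmp/ftrace-out
```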

Again, I cannot reproduce this, and I have been through the code several
times over the past few days, and am not seeing it.  I could start
sending you random diagnostic patches, but it would be much better if
we could get the trace data from the failure.

							Thanx, Paul

