Date:	Mon, 26 Apr 2010 19:28:56 -0400
From:	Trond Myklebust <Trond.Myklebust@...app.com>
To:	Robert Wimmer <kernel@...ceti.net>
Cc:	"Michael S. Tsirkin" <mst@...hat.com>, Avi Kivity <avi@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	bugzilla-daemon@...zilla.kernel.org,
	Rusty Russell <rusty@...tcorp.com.au>,
	Mel Gorman <mel@....ul.ie>, linux-nfs@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [Bugme-new] [Bug 15709] New: swapper page allocation failure

On Tue, 2010-04-27 at 00:18 +0200, Robert Wimmer wrote: 
> > Sure. In addition to what you did above, please do
> >
> > mount -t debugfs none /sys/kernel/debug
> >
> > and then cat the contents of the pseudofile at
> >
> > /sys/kernel/debug/tracing/stack_trace
> >
> > Please do this more or less immediately after you've finished mounting
> > the NFSv4 client.
> >   
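For reference, the whole sequence is just the following (server:/export
and /mnt are placeholders for the real export and mount point; this
assumes a kernel built with CONFIG_STACK_TRACER):

    # mount debugfs so the tracing files are reachable
    mount -t debugfs none /sys/kernel/debug

    # the NFSv4 mount under test
    mount -t nfs4 server:/export /mnt

    # dump the deepest kernel stack recorded by the stack tracer
    cat /sys/kernel/debug/tracing/stack_trace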
> 
> I've uploaded the stack trace. It was generated
> directly after mounting. Here are the stacks:
> 
> After mounting:
> https://bugzilla.kernel.org/attachment.cgi?id=26153
> After the soft lockup:
> https://bugzilla.kernel.org/attachment.cgi?id=26154
> The dmesg output of the soft lockup:
> https://bugzilla.kernel.org/attachment.cgi?id=26155
> 
> > Does your server have the 'crossmnt' or 'nohide' flags set, or does it
> > use the 'refer' export option anywhere? If so, then we might have to
> > test further, since those may trigger the NFSv4 submount feature.
> >   
> The server has the following settings:
> rw,nohide,insecure,async,no_subtree_check,no_root_squash
> 
> Thanks!
> Robert
> 
> 
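For reference, an /etc/exports entry carrying those options would look
roughly like this (the export path and client spec are placeholders):

    /export  192.168.0.0/24(rw,nohide,insecure,async,no_subtree_check,no_root_squash)

Since nohide is in that list, the NFSv4 submount feature mentioned above
may well be in play here.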

That second trace is more than 5.5K deep, more than half of which is
socket overhead :-(((.

The process stack does not appear to have overflowed; however, that trace
doesn't include any IRQ stack overhead.

OK... So what happens if we get rid of half of that trace by forcing
asynchronous tasks such as this to run entirely in rpciod instead of
first trying to run in the process context?

See the attachment...

[Attachment: "linux-2.6.34-000-reduce_async_rpc_stack_usage.dif" (text/plain, 856 bytes)]
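The shape of the change, as a sketch against the 2.6.34 sunrpc scheduler
(the attached .dif is the authoritative version; rpc_make_runnable() is
what hands RPC_IS_ASYNC tasks to the rpciod workqueue):

    void rpc_execute(struct rpc_task *task)
    {
    	rpc_set_active(task);
    	/* rpc_make_runnable() queues async tasks straight onto rpciod... */
    	rpc_make_runnable(task);
    	/* ...so only synchronous tasks still run on the caller's stack */
    	if (!RPC_IS_ASYNC(task))
    		__rpc_execute(task);
    }

The intent is that an async task's transmit path then only ever runs on
rpciod's fresh stack, never on top of whatever the submitting process has
already piled up.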
