Message-ID: <20140409175322.GZ18016@ZenIV.linux.org.uk>
Date:	Wed, 9 Apr 2014 18:53:23 +0100
From:	Al Viro <viro@...IV.linux.org.uk>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Serge E. Hallyn" <serge@...lyn.com>,
	Linux-Fsdevel <linux-fsdevel@...r.kernel.org>,
	Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andy Lutomirski <luto@...capital.net>,
	Rob Landley <rob@...dley.net>,
	Miklos Szeredi <miklos@...redi.hu>,
	Christoph Hellwig <hch@...radead.org>,
	Karel Zak <kzak@...hat.com>,
	"J. Bruce Fields" <bfields@...ldses.org>,
	Fengguang Wu <fengguang.wu@...el.com>
Subject: Re: [GIT PULL] Detaching mounts on unlink for 3.15-rc1

On Wed, Apr 09, 2014 at 10:32:14AM -0700, Eric W. Biederman wrote:

> For resolving a deeply nested symlink that hits the limit of 8 nested
> symlinks, I find 4688 bytes left on the stack.  Which means we use
> roughly 3504 bytes of stack when stating a deeply nested symlink.
> 
> For umount I had a little trouble measuring as typically the work done
> by umount was not the largest stack consumer, but I found for a small
> ext4 filesystem after the umount operation was complete there were
> 5152 bytes left on the stack, or umount used roughly 3040 bytes.

A bit less - we have a non-empty stack footprint from sys_umount() itself.

> 3504 + 3040 = 6544 bytes of stack used or 1648 bytes of stack left
> unused.  Which certainly isn't a lot of margin but it is not overflowing
> the kernel stack either. 
> 
> Is there a case that you see where umount uses a lot more kernel stack?  Is
> your concern an architecture other than x86_64 with different
> limitations?

For starters, put that ext4 on top of dm-raid or dm-multipath.  That alone
will very likely push you over the top.

Keep in mind, BTW, that you do not have full 8K to play with - there's
struct thread_info that should not be stepped upon.  Not particularly large
(IIRC, restart_block is the largest piece in the amd64 one), but it eats about
100 bytes.

I'd probably use renameat(2) in testing - i.e. trigger the shite when
resolving a deeply nested symlink in renameat() arguments.  That brings
extra struct nameidata into the game, i.e. extra 152 bytes chewed off the
stack.
