Date:	Tue, 25 May 2010 00:01:09 +0100
From:	Al Viro <viro@...IV.linux.org.uk>
To:	Trond Myklebust <trond.myklebust@....uio.no>
Cc:	Neil Brown <neilb@...e.de>,
	"Dr. J. Bruce Fields" <bfields@...ldses.org>,
	Chuck Lever <chuck.lever@...cle.com>,
	linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] VFS: fix recent breakage of FS_REVAL_DOT

On Mon, May 24, 2010 at 05:13:32PM -0400, Trond Myklebust wrote:

> Sorry... I misunderstood you.
> 
> In cases like the above, then the default behaviour of the server would
> be to assign the same filehandles to those mount points. The
> administrator can, however, make them different by choosing to use the
> 'fsid' mount option to manually assign different fsids to the different
> export points.
> 
> If not, then the client will automatically group these things in the
> same superblock, so like the server, it too is supposed to share the
> same inode for these different objects. It will then use
> d_obtain_alias() to get a root dentry for that inode (see
> nfs4_get_root()).
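
[For concreteness, the server-side knob described above is the fsid= option in /etc/exports; a minimal sketch, with illustrative paths and fsid values:

    # Same underlying directory exported at two points: without explicit
    # fsids the server derives identical filehandles for both; assigning
    # distinct fsids makes the export points distinguishable.
    /a/z    *(ro,fsid=1)
    /b/z    *(ro,fsid=2)
]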

Yes, it will.  So what will happen in nfs_follow_referral()?  Note that
we check the rootpath returned by the server (whatever it ends up
being) against mnt_devname + the relative path from mnt_root to the referral
point.  In this case it'll be /a/z or /b/z (depending on which export
the server selects when it sees the fsid) vs /a/z/x or /b/z/x (depending
on which one the client walks into).  And the calls of nfs4_proc_fs_locations()
will get identical arguments whether the client walks into /a/z/x or /b/z/x.
So will the actual RPC requests seen by the server.  So in at least one of
those cases we will get a rootpath that is _not_ the prefix we are
expecting, and we step into
        if (strncmp(path, fs_path, strlen(fs_path)) != 0) {
                dprintk("%s: path %s does not begin with fsroot %s\n",
                        __func__, path, fs_path);
                return -ENOENT;
        }
in nfs4_validate_fspath().

Question regarding RFC3530: is it actually allowed to have the same fhandle
show up in two different locations in the server's namespace?  If so, what
should GETATTR with FS_LOCATIONS return for it?

Client question: what stops you from stack overflows in that area?  The call
chains you've got are *deep*, and I really wonder what happens if you
hit a referral point while traversing a nested symlink: pathname
resolution (already several levels into recursion) calls ->follow_link(),
bounces down through nfs_do_refmount/nfs_follow_referral/try_location/
vfs_kern_mount/nfs4_referral_get_sb/nfs_follow_remote_path into
vfs_path_lookup, which will cheerfully add a few more loops like that.

Sure, the *total* nesting depth through symlinks is still limited to 8, but
that pile of stack frames is _MUCH_ fatter than what we normally have in
pathname resolution.  You've suddenly added ~60 extra stack frames to the
worst-case stack footprint of pathname resolution.  Don't try that
on sparc64, boys and girls; it won't be happy with an attempt to carve ~12KB
extra out of its kernel stack...  In fact, it's worse than just ~60 stack
frames - several of them will contain an on-stack struct nameidata, which
very definitely will _not_ fit into the minimal stack frame.  That's about
160 bytes extra for each of those (up to 7).

Come to think of that, x86 variants might get rather upset about that kind
of treatment as well.  Minimal stack frames are smaller, but so's the stack...
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
