Message-ID: <20160705022556.GK14480@ZenIV.linux.org.uk>
Date: Tue, 5 Jul 2016 03:25:56 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Oleg Drokin <green@...uxhacker.ru>
Cc: Mailing List <linux-kernel@...r.kernel.org>,
"<linux-fsdevel@...r.kernel.org>" <linux-fsdevel@...r.kernel.org>
Subject: Re: More parallel atomic_open/d_splice_alias fun with NFS and
possibly more FSes.
On Sun, Jul 03, 2016 at 11:55:09PM -0400, Oleg Drokin wrote:
> Quite a bit, actually. If you connect to a rogue Lustre server,
> currently there are many ways it can crash the client.
> I suspect this is true not just of Lustre: if e.g. an NFS server starts to
> send directory inodes with duplicated inode numbers or some such,
> VFS would not be super happy about such "hardlinked" directories either.
> This is before we even consider that it can feed you garbage data
> to crash your apps (or substitute binaries to do something else).
NFS client is at least supposed to try to be resistant to that. As in,
"if an 0wn3d NFS server can be escalated to buggered client, it's a bug in
client and we are expected to try and fix it".
[snip]
> Thanks, I'll give this a try.
BTW, could you take a look at
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs.git#sendmsg.lustre?
It's a bunch of simplifications that became possible once sendmsg()/recvmsg()
switched to iov_iter, stopped mangling the iovecs and went for predictable
behaviour re advancing the iterator.