Message-ID: <47A39471.4010105@redhat.com>
Date: Fri, 01 Feb 2008 16:51:45 -0500
From: Peter Staubach <staubach@...hat.com>
To: Miklos Szeredi <miklos@...redi.hu>
CC: linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org,
akpm@...ux-foundation.org, trond.myklebust@....uio.no,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 2/3] enhanced syscall ESTALE error handling (v2)

Miklos Szeredi wrote:
> This doesn't apply to -mm, because the ro-mounts stuff touches a lot
> of the same places as this patch. You probably need to rebase this on
> top of those changes.
>
>
>> This patch adds handling for the error ESTALE to the system
>> calls which take pathnames as arguments. The algorithm is to
>> detect that an ESTALE error has occurred during an operation
>> subsequent to the lookup process, unwind appropriately, and
>> then perform the lookup process again. Eventually, the lookup
>> process will return either an error or a valid dentry/inode
>> combination, and the operation can then succeed or fail on
>> its own merits.
>>
>
> If a broken NFS server or FUSE filesystem keeps returning ESTALE, this
> goes into an infinite loop. How are we planning to deal with that?
>
>
Would you describe the situation that would cause the kernel to
go into an infinite loop, please?
Please note that, at least for NFS, this looping is interruptible
by the user, so the system can't get stuck in a way that nothing
can be done about.
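
The retry logic is roughly this pattern (a sketch only;
do_lookup_and_op() is an illustrative stand-in for "look up the
path and perform the actual operation", not a helper from the
patch):

	static int retry_estale_op(const char *pathname)
	{
		int error;

		for (;;) {
			error = do_lookup_and_op(pathname);
			if (error != -ESTALE)
				break;
			/* Stay interruptible so a server that keeps
			 * returning ESTALE can't wedge the task. */
			if (signal_pending(current)) {
				error = -EINTR;
				break;
			}
		}
		return error;
	}
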
> And it has to be dealt with either in the VFS or in the kernel parts
> of the relevant filesystems. We can't just say "fix the broken
> servers", especially not with FUSE, where the server is totally
> untrusted.
Nope, we certainly can't depend upon fixing servers. The client
should not rely on the server to avoid things like looping.
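
For what it's worth, one client-side way to guarantee termination
would be to cap the number of retries (again only a sketch; the
limit is arbitrary and the helper name is made up):

	#define ESTALE_RETRY_MAX	10	/* arbitrary cap */

	static int retry_estale_op_bounded(const char *pathname)
	{
		int error;
		int tries;

		for (tries = 0; tries < ESTALE_RETRY_MAX; tries++) {
			error = do_lookup_and_op(pathname);
			if (error != -ESTALE)
				return error;
		}
		/* Give up: the server kept handing back stale
		 * file handles. */
		return -ESTALE;
	}
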
Thanx...
ps