Message-ID: <45A4E457.7020403@panasas.com>
Date: Wed, 10 Jan 2007 15:04:23 +0200
From: Benny Halevy <bhalevy@...asas.com>
To: Benny Halevy <bhalevy@...asas.com>,
Trond Myklebust <trond.myklebust@....uio.no>,
Jan Harkes <jaharkes@...cmu.edu>,
Miklos Szeredi <miklos@...redi.hu>, nfsv4@...f.org,
linux-kernel@...r.kernel.org,
Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>,
linux-fsdevel@...r.kernel.org,
Jeff Layton <jlayton@...chiereds.net>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [nfsv4] RE: Finding hardlinks
Nicolas Williams wrote:
> On Thu, Jan 04, 2007 at 12:04:14PM +0200, Benny Halevy wrote:
>> I agree that the way the client implements its cache is out of the protocol
>> scope. But how do you interpret "correct behavior" in section 4.2.1?
>> "Clients MUST use filehandle comparisons only to improve performance, not for correct behavior. All clients need to be prepared for situations in which it cannot be determined whether two filehandles denote the same object and in such cases, avoid making invalid assumptions which might cause incorrect behavior."
>> Don't you consider data corruption due to cache inconsistency an incorrect behavior?
>
> If a file with multiple hardlinks appears to have multiple distinct
> filehandles then a client like Trond's will treat it as multiple
> distinct files (with the same hardlink count, and you won't be able to
> find the other links to them -- oh well). Can this cause data
> corruption? Yes, but only if there are applications that rely on the
> different file names referencing the same file, and backup apps on the
> client won't get the hardlinks right either.
The case I'm discussing is multiple filehandles for the same name,
not even for different hardlinks. This causes spurious EIO errors
on the client when the filehandle changes, and cache inconsistency
when the file is opened multiple times in parallel.
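
To make the failure mode concrete, here is a rough sketch in C of the
revalidation step where this bites (the names are made up for
illustration, not the actual Linux client code): the client compares
the filehandle the server just returned for a name against the one it
cached, and when the filehandle differs for the same unchanged file,
the only safe move is to drop the inode, which the application sees
as a spurious error.

#include <string.h>
#include <errno.h>

struct nfs_fh {
	unsigned short size;
	unsigned char  data[128];	/* NFS4_FHSIZE */
};

struct cached_inode {
	struct nfs_fh fh;		/* filehandle seen at first lookup */
	/* ... page cache, attributes, etc. ... */
};

static int fh_equal(const struct nfs_fh *a, const struct nfs_fh *b)
{
	return a->size == b->size && memcmp(a->data, b->data, a->size) == 0;
}

/* Called after a fresh LOOKUP/GETFH for a name we already have cached. */
int revalidate_inode(struct cached_inode *ino, const struct nfs_fh *new_fh)
{
	if (fh_equal(&ino->fh, new_fh))
		return 0;		/* same object, cache stays valid */

	/*
	 * Different filehandle for the same name: the client cannot
	 * tell whether this is still the same object (non-compliant
	 * server) or a replacement, so it drops the cached inode.
	 * In-flight I/O then fails even though the file never changed.
	 */
	return -EIO;
}

The parallel-open case is the same problem from the other side: each
open that returns a new filehandle instantiates a separate cached
inode, so writes through one are invisible through the other.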
>
> What I don't understand is why getting the fileid is so hard -- always
> GETATTR when you GETFH and you'll be fine. I'm guessing that's not as
> difficult as it is to maintain a hash table of fileids.
It's not difficult at all; it's just that the client can't rely on fileids
being unique in both space and time, because of server non-compliance
(e.g. NetApp snapshots) and fileid reuse after delete.
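
For what it's worth, the hash table itself is the easy part. A minimal
sketch, keying the cache on (fsid, fileid) as Nicolas suggests
(hypothetical names again, not real client code):

#include <stdint.h>
#include <stdlib.h>

struct file_key {
	uint64_t fsid_major;
	uint64_t fsid_minor;
	uint64_t fileid;	/* RFC 3530 mandatory attribute */
};

struct cached_file {
	struct file_key key;
	struct cached_file *next;	/* hash chain */
	/* ... data cache ... */
};

#define HASH_SIZE 1024
static struct cached_file *table[HASH_SIZE];

static unsigned hash(const struct file_key *k)
{
	return (unsigned)((k->fileid ^ k->fsid_major ^ k->fsid_minor)
			  % HASH_SIZE);
}

/* Look up a cached file by (fsid, fileid); insert a new entry if absent. */
struct cached_file *lookup_or_insert(const struct file_key *k)
{
	struct cached_file *f;

	for (f = table[hash(k)]; f; f = f->next)
		if (f->key.fileid == k->fileid &&
		    f->key.fsid_major == k->fsid_major &&
		    f->key.fsid_minor == k->fsid_minor)
			return f;	/* wrong if the server reused a fileid! */

	f = calloc(1, sizeof(*f));
	if (!f)
		return NULL;
	f->key = *k;
	f->next = table[hash(k)];
	table[hash(k)] = f;
	return f;
}

The trouble is the lookup's assumption: once a server reuses a fileid
after a delete, or exposes a live file and its snapshot copy under the
same fileid, this function aliases two distinct objects onto one cache
entry, and the client has no way to tell them apart.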