Message-ID: <20120416073655.7cdb90cf@corrin.poochiereds.net>
Date:	Mon, 16 Apr 2012 07:36:55 -0400
From:	Jeff Layton <jlayton@...hat.com>
To:	Bernd Schubert <bernd.schubert@...m.fraunhofer.de>
Cc:	Malahal Naineni <malahal@...ibm.com>, linux-nfs@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	pstaubach@...grid.com, miklos@...redi.hu, viro@...IV.linux.org.uk,
	hch@...radead.org, michael.brantley@...haw.com,
	sven.breuner@...m.fraunhofer.de
Subject: Re: [PATCH RFC] vfs: make fstatat retry on ESTALE errors from
 getattr call

On Sun, 15 Apr 2012 21:03:23 +0200
Bernd Schubert <bernd.schubert@...m.fraunhofer.de> wrote:

> On 04/13/2012 05:42 PM, Jeff Layton wrote:
> > (note: please don't trim the CC list!)
> > 
> > Indefinitely does make some sense (as Peter articulated in his
> > original set). It's possible you could race several times in a row,
> > or that a server misconfiguration or something similar has caused a
> > transient error that will eventually recover. His assertion was that
> > any limit on the number of retries is by definition wrong. For NFS, a
> > fatal signal ought to interrupt things as well, so retrying
> > indefinitely has some appeal there.
> > 
> > OTOH, we do have to contend with filesystems that might return ESTALE
> > persistently for other reasons and that might not respond to signals.
> > Miklos pointed out that some FUSE fs' do this in his review of Peter's
> > set.
> > 
> > As a purely defensive coding measure, limiting the number of retries to
> > something finite makes sense. If we're going to do that though, I'd
> > probably recommend that we set the number of retries to something
> > higher just so that this is more resilient in the face of multiple
> > races. Those other fs' might "spin" a bit in that case, but it is an
> > error condition and IMO resiliency trumps performance -- at least in
> > this case.
> 
> I am definitely voting against an infinite number of retries. I'm
> working on FhGFS, which supports distributed metadata servers. So when
> a file is moved around between directories, its file handle, which
> contains the metadata target ID, might become invalid. As NFSv3 is
> stateless, we cannot inform the client about that and must return
> ESTALE then. NFSv4 is better, but I'm not sure how well invalidating a
> file handle works. So retrying once on ESTALE might be a good idea, but
> retrying forever is not.

It's important to note that I'm only proposing to wrap syscalls that
take a pathname argument this way. We can't do anything about those
that don't, since at that point we have no way to retry the lookup.
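
To make this concrete, here's a rough sketch of the shape such a
wrapper might take for the fstatat path. This isn't the actual patch,
and MAX_ESTALE_RETRIES is just a placeholder for whatever finite bound
we settle on:

#define MAX_ESTALE_RETRIES	5	/* placeholder bound */

static int do_fstatat_sketch(int dfd, const char __user *filename,
			     struct kstat *stat, int flag)
{
	struct path path;
	unsigned int lookup_flags = LOOKUP_FOLLOW;	/* flag handling elided */
	unsigned int tries = 0;
	int error;

retry:
	error = user_path_at(dfd, filename, lookup_flags, &path);
	if (error)
		return error;

	error = vfs_getattr(path.mnt, path.dentry, stat);
	path_put(&path);

	/*
	 * On ESTALE, redo the lookup from scratch with LOOKUP_REVAL to
	 * bypass cached dentries, but give up after a finite number of
	 * attempts so a filesystem that returns ESTALE persistently
	 * can't keep us spinning here forever.
	 */
	if (error == -ESTALE && tries++ < MAX_ESTALE_RETRIES) {
		lookup_flags |= LOOKUP_REVAL;
		goto retry;
	}
	return error;
}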

So I'm not sure this patch would affect the case you're concerned
about one way or another. If you move the file to a different
directory, then its pathname would also change, and at that point
you'd end up with an ENOENT error or something on the next retry.

If the file was open and you were (for instance) reading or writing to
it from a client when you moved it, then we can't retry the lookup at
that point. The open is long since done and the pathname is now gone.
You'll get an ESTALE back in userspace regardless.
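
To illustrate that from userspace (the path here is made up), this is
roughly what such an application would see:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/mnt/nfs/scratch/data", O_RDONLY); /* made-up path */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* ... file is moved or unlinked on the server behind our back ... */

	if (read(fd, buf, sizeof(buf)) < 0 && errno == ESTALE)
		fprintf(stderr, "read: %s\n", strerror(errno));

	close(fd);
	return 0;
}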

> Also, what about asymmetric HA servers? I seem to remember that this
> also resulted in ESTALE. So for example, server1 exports /home and
> /scratch, but on failure server2 can only take over /home and denies
> access to /scratch.
> 

That sounds like a broken cluster configuration. Still...

Presumably at some point in the future, a sysadmin would intervene and
fix the situation such that /scratch is available again. Is it better
to return an error to the application at that point, or simply allow it
to keep retrying until the problem has been fixed?

The person with the long-running job that's doing operations
in /scratch would probably prefer the latter. If not, then they could
always send the program a fatal signal to stop it altogether.
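
FWIW, the retry-forever-but-killable variant for NFS could look
something like this in the sketch above, with the finite bound
replaced by a fatal-signal check:

	/*
	 * Keep redoing the lookup on ESTALE, but let a fatal signal
	 * break us out so the user (or the admin) can always stop the
	 * program.
	 */
	if (error == -ESTALE) {
		if (fatal_signal_pending(current))
			return -EINTR;
		lookup_flags |= LOOKUP_REVAL;
		goto retry;
	}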

-- 
Jeff Layton <jlayton@...hat.com>