Date:   Fri, 9 Sep 2022 01:05:15 +0000
From:   Trond Myklebust <trondmy@...merspace.com>
To:     "neilb@...e.de" <neilb@...e.de>
CC:     "zohar@...ux.ibm.com" <zohar@...ux.ibm.com>,
        "djwong@...nel.org" <djwong@...nel.org>,
        "xiubli@...hat.com" <xiubli@...hat.com>,
        "brauner@...nel.org" <brauner@...nel.org>,
        "bfields@...ldses.org" <bfields@...ldses.org>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
        "david@...morbit.com" <david@...morbit.com>,
        "fweimer@...hat.com" <fweimer@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "jlayton@...nel.org" <jlayton@...nel.org>,
        "chuck.lever@...cle.com" <chuck.lever@...cle.com>,
        "linux-man@...r.kernel.org" <linux-man@...r.kernel.org>,
        "linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
        "tytso@....edu" <tytso@....edu>,
        "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
        "jack@...e.cz" <jack@...e.cz>,
        "linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
        "linux-btrfs@...r.kernel.org" <linux-btrfs@...r.kernel.org>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "adilger.kernel@...ger.ca" <adilger.kernel@...ger.ca>,
        "lczerner@...hat.com" <lczerner@...hat.com>,
        "ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new
 STATX_INO_VERSION field

On Fri, 2022-09-09 at 10:51 +1000, NeilBrown wrote:
> On Fri, 09 Sep 2022, Trond Myklebust wrote:
> > On Fri, 2022-09-09 at 08:55 +1000, NeilBrown wrote:
> > > On Fri, 09 Sep 2022, Jeff Layton wrote:
> > > > On Thu, 2022-09-08 at 11:21 -0400, Theodore Ts'o wrote:
> > > > > On Thu, Sep 08, 2022 at 10:33:26AM +0200, Jan Kara wrote:
> > > > > > It boils down to the fact that we don't want to call
> > > > > > mark_inode_dirty() from the IOCB_NOWAIT path, because for
> > > > > > lots of filesystems that means a journal operation, and
> > > > > > there is a high chance that it may block.
> > > > > > 
> > > > > > Presumably we could treat inode dirtying after an i_version
> > > > > > change similarly to how we handle timestamp updates with the
> > > > > > lazytime mount option (i.e., not dirty the inode immediately
> > > > > > but only after a delay), but then the time window for
> > > > > > i_version inconsistencies due to a crash would be much
> > > > > > larger.
> > > > > 
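To make Jan's tradeoff concrete, a lazytime-style deferral might look
roughly like the hypothetical sketch below. It reuses the existing
inode_inc_iversion() helper and lazytime's I_DIRTY_TIME flag purely for
illustration; it is not actual kernel code:

#include <linux/fs.h>
#include <linux/iversion.h>

/*
 * Hypothetical sketch: bump i_version in memory, but defer the
 * journal write the way lazytime defers timestamp writeback.  Any
 * bumps not yet written back are lost on a crash, which is the
 * larger inconsistency window described above.
 */
static void inode_bump_version_lazy(struct inode *inode)
{
        inode_inc_iversion(inode);      /* in-memory bump only */
        /* dirty the inode without forcing an immediate journal
         * operation; flushed later by writeback, sync, or unmount */
        __mark_inode_dirty(inode, I_DIRTY_TIME);
}
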
> > > > > Perhaps this is a radical suggestion, but a lot of these
> > > > > problems seem to be due to the concern "what if the file
> > > > > system crashes" (and so we need to worry about making sure
> > > > > that any increment to i_version MUST be persisted after it is
> > > > > made).
> > > > > 
> > > > > Well, if we assume that unclean shutdowns are rare, then
> > > > > perhaps we shouldn't be optimizing for that case.  So... what
> > > > > if a file system had a counter, representing an unclean
> > > > > shutdown, which got incremented each time its journal is
> > > > > replayed?  That shouldn't happen often, but if it does, any
> > > > > number of i_version updates may have been lost.  So in that
> > > > > case, the NFS client should invalidate all of its caches.
> > > > > 
> > > > > If the i_version field were large enough, we could just
> > > > > prefix the existing i_version number with the "unclean
> > > > > shutdown counter" when it is sent over the NFS protocol to
> > > > > the client.  But if that field is too small, and if (as I
> > > > > understand things) NFS just needs to know when i_version is
> > > > > different, we could simply hash the "unclean shutdown
> > > > > counter" with the inode's "i_version counter", and let that
> > > > > be the version which is sent from the NFS server to the
> > > > > client.
> > > > > 
> > > > > If we could do that, then it doesn't become critical that
> > > > > every single i_version bump be persisted to disk, and we
> > > > > could treat it like a lazytime update: it's guaranteed to be
> > > > > updated when we do a clean unmount of the file system (and
> > > > > when the file system is frozen), but on a crash there is no
> > > > > guarantee that all i_version bumps will be persisted.  We do,
> > > > > however, have this "unclean shutdown" counter to deal with
> > > > > that case.
> > > > > 
> > > > > Would this make life easier for folks?
> > > > > 
> > > > >                                                 - Ted
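
As a rough userspace illustration of Ted's two variants (prefixing when
the field is wide enough, hashing when it isn't; all names here are
made up, not kernel or nfsd code):

#include <stdint.h>

/* splitmix64 finalizer: a crash-counter bump perturbs every bit */
static uint64_t mix64(uint64_t x)
{
        x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
        x ^= x >> 27; x *= 0x94d049bb133111ebULL;
        x ^= x >> 31;
        return x;
}

/* Variant 1: the field is wide enough, so prefix the crash counter. */
uint64_t change_attr_prefixed(uint64_t crash_count, uint64_t i_version)
{
        return (crash_count << 48) | (i_version & ((1ULL << 48) - 1));
}

/* Variant 2: the field is too small; hash the two together.  NFS
 * only needs "different", so losing monotonicity is acceptable in
 * Ted's framing. */
uint64_t change_attr_hashed(uint64_t crash_count, uint64_t i_version)
{
        return mix64(crash_count ^ mix64(i_version));
}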
> > > > 
> > > > Thanks for chiming in, Ted. That's part of the problem, but
> > > > we're actually not too worried about that case:
> > > > 
> > > > nfsd mixes the ctime in with i_version, so you'd have to crash,
> > > > plus have the clock jump backward by juuuust enough, to get the
> > > > i_version and ctime back into a state they were in before the
> > > > crash, but with different data.  We're assuming that that is
> > > > difficult to achieve in practice.
> > > > 
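The mixing Jeff refers to looks roughly like the following,
paraphrased into freestanding C from the nfsd helper, so take the
details as illustrative rather than authoritative:

#include <stdint.h>
#include <time.h>

/* Roughly how nfsd folds ctime into the reported change attribute:
 * for a post-crash collision, the clock would have to land on the
 * same second/nanosecond pair while i_version also matched. */
uint64_t change_attr_with_ctime(const struct timespec *ctime,
                                uint64_t i_version)
{
        uint64_t chattr = (uint64_t)ctime->tv_sec;
        chattr <<= 30;                  /* room for the nanoseconds */
        chattr += (uint64_t)ctime->tv_nsec;
        chattr += i_version;
        return chattr;
}
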
> > > > The issue with a reboot counter (or similar) is that after an
> > > > unclean crash the NFS client would end up invalidating every
> > > > inode in the cache, as all of the i_versions would change.
> > > > That's probably excessive.
> > > > 
> > > > The bigger issue (at the moment) is atomicity: when we fetch an
> > > > i_version, the natural inclination is to associate it with the
> > > > state of the inode at some point in time, so we need it to be
> > > > updated atomically with certain other attributes of the inode.
> > > > That's the part I'm trying to sort through at the moment.
> > > 
> > > I don't think atomicity matters nearly as much as ordering.
> > > 
> > > The i_version must not be visible before the change that it
> > > reflects.  It is OK for it to be after, even seconds after,
> > > without great cost.  It is bad for it to be earlier.  Any
> > > unlocked gap after the i_version update and before the change is
> > > visible can result in a race and incorrect caching.
> > > 
> > > Even for directory updates, where NFSv4 wants atomic before/after
> > > version numbers, they don't need to be atomic w.r.t. the change
> > > being visible.
> > > 
> > > If three concurrent file creates cause the version number to go
> > > from 4 to 7, then it is important that one op sees "4,5", one
> > > sees "5,6" and one sees "6,7", but it doesn't matter if
> > > concurrent lookups only see version 4 even while they can see
> > > the newly created names.
> > > 
> > > A longer gap increases the risk of an unnecessary cache flush,
> > > but it doesn't lead to incorrectness.
> > > 
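A minimal userspace sketch of the ordering rule Neil describes, using
C11 atomics in place of kernel barriers: if a reader observes the new
version it is guaranteed to also observe the new data, while the
reverse (new data, old version) is allowed:

#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t file_data;      /* stands in for the change */
static _Atomic uint64_t i_version;

void write_side(uint64_t new_data)
{
        /* make the change visible first ... */
        atomic_store_explicit(&file_data, new_data,
                              memory_order_relaxed);
        /* ... then publish the version; this release pairs with the
         * acquire below, so the new version is never visible before
         * the data it reflects */
        atomic_fetch_add_explicit(&i_version, 1, memory_order_release);
}

uint64_t read_side(uint64_t *data_out)
{
        uint64_t v = atomic_load_explicit(&i_version,
                                          memory_order_acquire);
        *data_out = atomic_load_explicit(&file_data,
                                         memory_order_relaxed);
        return v;       /* may lag the data; it never leads it */
}
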
> > 
> > I'm not really sure what you mean when you say that a 'longer gap
> > increases the risk of an unnecessary cache flush'. Either the
> > change attribute update is atomic with the operation it is
> > recording, or it is not. If that update is recorded in the NFS
> > reply as not being atomic, then the client will evict all cached
> > data that is associated with that change attribute at some point.
> > 
> > > So I think we should put the version update *after* the change
> > > is visible, and not require locking (beyond a memory barrier)
> > > when reading the version. It should be as soon after as
> > > practical, but no sooner.
> > > 
> > 
> > Ordering is not a sufficient condition. The guarantee needs to be
> > that any application that reads the change attribute, then reads
> > file data, and then reads the change attribute again will see the
> > two change attribute values as being the same *if and only if*
> > there were no changes to the file data made after the first read
> > and before the second read of the change attribute.
> 
> I'm saying that only the "only if" is mandatory - getting that
> wrong has a correctness cost.
> BUT the "if" is less critical.  Getting that wrong has a
> performance cost.  We want to get it wrong as rarely as possible,
> but there is a performance cost to the underlying filesystem in
> providing perfection, and that must be balanced against the
> performance cost to NFS of providing imperfect results.
> 
I strongly disagree.

If the 2 change attribute values are different, then it is OK for the
file data to be the same, but if the file data has changed, then the
change attributes MUST differ.

Conversely, if the 2 change attributes are the same then it MUST be the
case that the file data did not change.

So it really needs to be an 'if and only if' case.
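
In application terms, the 'if and only if' is what makes the classic
validation loop sound. Sketched below against the interface this RFC
documents, with a hypothetical get_change_attr() helper standing in
for a statx(2) call requesting the proposed STATX_INO_VERSION field:

#include <stdint.h>
#include <unistd.h>

uint64_t get_change_attr(int fd);   /* hypothetical: statx(2) with
                                     * the proposed STATX_INO_VERSION */

/* Returns 1 if buf is known to be consistent with *attr.  The 'only
 * if' direction makes a match prove the data did not change; the
 * 'if' direction keeps mismatches (and hence cache invalidations)
 * from being spurious. */
int read_consistent(int fd, void *buf, size_t len, uint64_t *attr)
{
        uint64_t before = get_change_attr(fd);
        if (pread(fd, buf, len, 0) < 0)
                return -1;
        *attr = before;
        return get_change_attr(fd) == before;
}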

> For NFSv4, this is of limited interest for files.
> If the client has a delegation, then it is certain that no other
> client or server-side application will change the file, so it
> doesn't need to pay much attention to change ids.
> If the client doesn't have a delegation, then if there is any
> change to the changeid, the client cannot be certain that the
> change wasn't due to some other client, so it must purge its cache
> on close or lock.  So fine details of the changeid aren't
> interesting (as long as we have the "only if").
> 
> For directories, NFSv4 does want precise changeids, but directory
> ops need to be synchronous for NFS anyway, so the extra burden on
> the fs is small.
> 
> 
> > That includes the case where data was written after the read,
> > and a crash occurred after it was committed to stable storage.
> > If you only update the version after the written data is visible,
> > then there is a possibility that the crash could occur before any
> > change attribute update is committed to disk.
> 
> I think we all agree that handling a crash is hard.  I think that
> should be a separate consideration to how i_version is handled during
> normal running.
> 
> > 
> > IOW: the minimal condition needs to be that, for all cases below,
> > the application reads 'state B' as having occurred if any data
> > was committed to disk before the crash.
> > 
> > Application                             Filesystem
> > ===========                             ==========
> > read change attr <- 'state A'
> > read data <- 'state A'
> >                                         write data -> 'state B'
> >                                         <crash>+<reboot>
> > read change attr <- 'state B'
> 
> The important thing here is to not see 'state A'.  Seeing 'state
> C' should be acceptable.  Worst case, we could merge in the
> wall-clock time of system boot, but the filesystem should be able
> to be more helpful than that.
> 
Agreed.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@...merspace.com

