Date:   Tue, 13 Sep 2022 09:14:32 +1000
From:   "NeilBrown" <neilb@...e.de>
To:     "J. Bruce Fields" <bfields@...ldses.org>
Cc:     "Jeff Layton" <jlayton@...nel.org>,
        "Theodore Ts'o" <tytso@....edu>, "Jan Kara" <jack@...e.cz>,
        adilger.kernel@...ger.ca, djwong@...nel.org, david@...morbit.com,
        trondmy@...merspace.com, viro@...iv.linux.org.uk,
        zohar@...ux.ibm.com, xiubli@...hat.com, chuck.lever@...cle.com,
        lczerner@...hat.com, brauner@...nel.org, fweimer@...hat.com,
        linux-man@...r.kernel.org, linux-api@...r.kernel.org,
        linux-btrfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, ceph-devel@...r.kernel.org,
        linux-ext4@...r.kernel.org, linux-nfs@...r.kernel.org,
        linux-xfs@...r.kernel.org
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new
 STATX_INO_VERSION field

On Mon, 12 Sep 2022, J. Bruce Fields wrote:
> On Sun, Sep 11, 2022 at 08:13:11AM +1000, NeilBrown wrote:
> > On Fri, 09 Sep 2022, Jeff Layton wrote:
> > > 
> > > The machine crashes and comes back up, and we get a query for i_version
> > > and it comes back as X. Fine, it's an old version. Now there is a write.
> > > What do we do to ensure that the new value doesn't collide with X+1? 
> > 
> > (I missed this bit in my earlier reply..)
> > 
> > How is it "Fine" to see an old version?
> > The file could have changed without the version changing.
> > And I thought one of the goals of the crash-count was to be able to
> > provide a monotonic change id.
> 
> I was still mainly thinking about how to provide reliable close-to-open
> semantics between NFS clients.  In the case the writer was an NFS
> client, it wasn't done writing (or it would have COMMITted), so those
> writes will come in and bump the change attribute soon, and as long as
> we avoid the small chance of reusing an old change attribute, we're OK,
> and I think it'd even still be OK to advertise
> CHANGE_TYPE_IS_MONOTONIC_INCR.

You seem to be assuming that the client doesn't crash at the same time
as the server (maybe they are both VMs on a host that lost power...)

If client A reads and caches, client B writes, the server crashes after
writing some data (to already allocated space so no inode update needed)
but before writing the new i_version, then client B crashes.
When server comes back the i_version will be unchanged but the data has
changed.  Client A will cache old data indefinitely...



> 
> If we're trying to do better than that, I'm just not sure what's right.

I think we need to require the filesystem to ensure that the i_version
is seen to increase shortly after any change becomes visible in the
file, and no later than the moment when the request that initiated the
change is acknowledged as being complete.  In the case of an unclean
restart, any file that is not known to have been unchanged immediately
before the crash must have i_version increased.

The simplest implementation is to have an unclean-restart counter and to
always include this, multiplied by some constant X, in the reported
i_version.  The filesystem guarantees to record the i_version (e.g. at
least in the journal) before it grows to X more than the previously
recorded value.  The filesystem gets to choose X.
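A minimal userspace sketch of that scheme (all names and the value of X
are illustrative, not taken from any actual kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

#define X 65536ULL		/* filesystem-chosen constant */

struct sketch_inode {
	uint64_t i_version;	/* in-memory version counter */
	uint64_t recorded;	/* last value durably recorded */
};

static uint64_t crash_count;	/* bumped on each unclean restart */

/* Version reported to clients: folding in the crash counter means a
 * value handed out before an unclean restart can never be reused
 * afterwards, provided we record before drifting by X. */
static uint64_t reported_version(const struct sketch_inode *ino)
{
	return crash_count * X + ino->i_version;
}

/* Called for each change; returns true when the filesystem must
 * record (journal) the new i_version before acknowledging, so that
 * the in-memory value never exceeds the recorded value by X. */
static bool bump_version(struct sketch_inode *ino)
{
	ino->i_version++;
	if (ino->i_version - ino->recorded >= X) {
		ino->recorded = ino->i_version;
		return true;	/* must journal now */
	}
	return false;
}

/* After an unclean restart, reload the last recorded value and
 * advance the crash counter: every reported version is then strictly
 * greater than anything reported before the crash. */
static void unclean_restart(struct sketch_inode *ino)
{
	crash_count++;
	ino->i_version = ino->recorded;
}
```

Since the in-memory value can drift at most X-1 beyond the recorded
value, adding X per crash is always enough to keep the reported
version monotonic.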

A more complex solution would be to record (similar to the way orphans
are recorded) any file which is open for write, and to add X to the
i_version for any "dirty" file still recorded during an unclean restart.
This would avoid bumping the i_version for read-only files.
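A sketch of that variant, again with purely illustrative names (the
persistent list stands in for something orphan-list-like; only files
still on it at restart get the jump):

```c
#include <stdbool.h>
#include <stdint.h>

#define X 65536ULL
#define MAX_DIRTY 64

struct sketch_inode {
	uint64_t i_version;
	bool dirty_recorded;	/* on the persistent "dirty" list */
};

static struct sketch_inode *dirty_list[MAX_DIRTY];
static int ndirty;

/* Record the file as open-for-write (this entry would be journalled,
 * like an orphan-list entry). */
static void open_for_write(struct sketch_inode *ino)
{
	if (!ino->dirty_recorded && ndirty < MAX_DIRTY) {
		dirty_list[ndirty++] = ino;
		ino->dirty_recorded = true;
	}
}

/* On clean close the final i_version is durably recorded, so the
 * file can safely be dropped from the dirty list. */
static void close_clean(struct sketch_inode *ino)
{
	for (int i = 0; i < ndirty; i++)
		if (dirty_list[i] == ino) {
			dirty_list[i] = dirty_list[--ndirty];
			break;
		}
	ino->dirty_recorded = false;
}

/* On unclean restart, only files still recorded as dirty get X added;
 * read-only files keep their i_version across the crash. */
static void restart_bump_dirty(void)
{
	for (int i = 0; i < ndirty; i++) {
		dirty_list[i]->i_version += X;
		dirty_list[i]->dirty_recorded = false;
	}
	ndirty = 0;
}
```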

There may be other solutions, but we should leave that up to the
filesystem.  Each filesystem might choose something different.

NeilBrown
