lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <YxoIjV50xXKiLdL9@mit.edu>
Date:   Thu, 8 Sep 2022 11:21:49 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Jan Kara <jack@...e.cz>
Cc:     NeilBrown <neilb@...e.de>, Jeff Layton <jlayton@...nel.org>,
        "J. Bruce Fields" <bfields@...ldses.org>, adilger.kernel@...ger.ca,
        djwong@...nel.org, david@...morbit.com, trondmy@...merspace.com,
        viro@...iv.linux.org.uk, zohar@...ux.ibm.com, xiubli@...hat.com,
        chuck.lever@...cle.com, lczerner@...hat.com, brauner@...nel.org,
        fweimer@...hat.com, linux-man@...r.kernel.org,
        linux-api@...r.kernel.org, linux-btrfs@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        ceph-devel@...r.kernel.org, linux-ext4@...r.kernel.org,
        linux-nfs@...r.kernel.org, linux-xfs@...r.kernel.org
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new
 STATX_INO_VERSION field

On Thu, Sep 08, 2022 at 10:33:26AM +0200, Jan Kara wrote:
> It boils down to the fact that we don't want to call mark_inode_dirty()
> from IOCB_NOWAIT path because for lots of filesystems that means journal
> operation and there are high chances that may block.
> 
> Presumably we could treat inode dirtying after i_version change similarly
> to how we handle timestamp updates with lazytime mount option (i.e., not
> dirty the inode immediately but only with a delay) but then the time window
> for i_version inconsistencies due to a crash would be much larger.

Perhaps this is a radical suggestion, but a lot of these problems seem
to stem from the concern "what if the file system crashes" (and hence
from the worry that every increment of i_version MUST be persisted
immediately after it happens).

Well, if we assume that unclean shutdowns are rare, then perhaps we
shouldn't be optimizing for that case.  So.... what if a file system
kept a counter which is incremented each time its journal is replayed,
i.e., after each unclean shutdown?  That shouldn't happen often, but
when it does, any number of i_version updates may have been lost.  In
that case, the NFS client should invalidate all of its caches.

If the i_version field were large enough, we could simply prefix the
existing i_version number with the "unclean shutdown counter" when it
is sent over the NFS protocol to the client.  But if that field is too
small, and if (as I understand things) NFS just needs to know when
i_version is different, we could simply hash the "unclean shutdown
counter" with the inode's "i_version counter", and let that be the
version which is sent from the server to the NFS client.

If we could do that, then it would no longer be critical that every
single i_version bump be persisted to disk, and we could treat it like
a lazytime update: it's guaranteed to be updated when we do a clean
unmount of the file system (and when the file system is frozen), but
on a crash there is no guarantee that all i_version bumps will be
persisted -- instead, we'd rely on the "unclean shutdown" counter to
deal with that case.

Would this make life easier for folks?

						- Ted
