Message-ID: <93b6d9f7cf997245bb68409eeb195f9400e55cd0.camel@kernel.org>
Date:   Wed, 21 Sep 2022 06:33:28 -0400
From:   Jeff Layton <jlayton@...nel.org>
To:     Dave Chinner <david@...morbit.com>
Cc:     Theodore Ts'o <tytso@....edu>, NeilBrown <neilb@...e.de>,
        Trond Myklebust <trondmy@...merspace.com>,
        "bfields@...ldses.org" <bfields@...ldses.org>,
        "zohar@...ux.ibm.com" <zohar@...ux.ibm.com>,
        "djwong@...nel.org" <djwong@...nel.org>,
        "brauner@...nel.org" <brauner@...nel.org>,
        "linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        "fweimer@...hat.com" <fweimer@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "chuck.lever@...cle.com" <chuck.lever@...cle.com>,
        "linux-man@...r.kernel.org" <linux-man@...r.kernel.org>,
        "linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
        "linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
        "jack@...e.cz" <jack@...e.cz>,
        "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
        "xiubli@...hat.com" <xiubli@...hat.com>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "adilger.kernel@...ger.ca" <adilger.kernel@...ger.ca>,
        "lczerner@...hat.com" <lczerner@...hat.com>,
        "ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>,
        "linux-btrfs@...r.kernel.org" <linux-btrfs@...r.kernel.org>
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new
 STATX_INO_VERSION field

On Wed, 2022-09-21 at 10:00 +1000, Dave Chinner wrote:
> On Tue, Sep 20, 2022 at 06:26:05AM -0400, Jeff Layton wrote:
> > On Tue, 2022-09-20 at 10:16 +1000, Dave Chinner wrote:
> > > IOWs, the NFS server can define its own on-disk persistent metadata
> > > using xattrs, and you don't need local filesystems to be modified at
> > > all. You can add the crash epoch into the change attr that is sent
> > > to NFS clients without having to change the VFS i_version
> > > implementation at all.
> > > 
> > > This whole problem is solvable entirely within the NFS server code,
> > > and we don't need to change local filesystems at all. NFS can
> > > control the persistence and format of the xattrs it uses, and it
> > > does not need new custom on-disk format changes from every
> > > filesystem to support this new application requirement.
> > > 
> > > At this point, NFS server developers don't need to care what the
> > > underlying filesystem format provides - the xattrs provide the crash
> > > detection and enumeration the NFS server functionality requires.
> > > 
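[ For illustration, a minimal userspace sketch of the kind of scheme
described above, assuming a hypothetical "user.nfsd.crash_epoch" xattr
on the export root. The xattr name and the bump-on-unclean-start policy
are invented for the example; real nfsd code would presumably live in
the kernel and use a trusted.* xattr. ]

/*
 * Hypothetical sketch (not nfsd code): persist a per-export crash
 * epoch in an xattr, entirely at the application level, so no
 * filesystem on-disk format change is needed.
 */
#include <stdio.h>
#include <stdint.h>
#include <errno.h>
#include <sys/xattr.h>

#define CRASH_EPOCH_XATTR "user.nfsd.crash_epoch"

/* Read the stored epoch; treat a missing xattr as epoch 0. */
static int read_crash_epoch(const char *root, uint64_t *epoch)
{
	ssize_t ret = getxattr(root, CRASH_EPOCH_XATTR, epoch, sizeof(*epoch));

	if (ret == (ssize_t)sizeof(*epoch))
		return 0;
	if (ret < 0 && errno == ENODATA) {
		*epoch = 0;
		return 0;
	}
	return -1;
}

/* Bump and persist the epoch when an unclean start is detected. */
static int bump_crash_epoch(const char *root, uint64_t *epoch)
{
	if (read_crash_epoch(root, epoch) < 0)
		return -1;
	(*epoch)++;
	return setxattr(root, CRASH_EPOCH_XATTR, epoch, sizeof(*epoch), 0);
}

int main(int argc, char **argv)
{
	uint64_t epoch;

	if (argc != 2 || bump_crash_epoch(argv[1], &epoch) < 0) {
		fprintf(stderr, "usage/error: %s <export-root>\n", argv[0]);
		return 1;
	}
	printf("crash epoch now %llu\n", (unsigned long long)epoch);
	return 0;
}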
> > 
> > Doesn't the filesystem already detect when it's been mounted after an
> > unclean shutdown?
> 
> Not every filesystem will be able to guarantee unclean shutdown
> detection at the next mount. That's the whole problem - NFS
> developers are asking for something that cannot be provided as
> generic functionality by individual filesystems, so the NFS server
> application is going to have to work around any filesystem that
> cannot provide the information it needs.
> 
> e.g. ext4 has its journal replayed by the userspace tools prior
> to mount, so when it then gets mounted by the kernel it's seen as a
> clean mount.
> 
> If we shut an XFS filesystem down due to filesystem corruption or
> failed IO to the journal, the kernel might not be able to
> replay the journal on mount (i.e. it is corrupt).  We then run
> xfs_repair, which fixes the corruption issue and _cleans the
> log_. When we next mount the filesystem, it results in a _clean
> mount_, and the kernel filesystem code cannot signal to NFS that an
> unclean shutdown occurred and so it should bump its crash counter.
> 
> IOWs, this whole "filesystems need to tell NFS about crashes"
> propagates all the way through *every filesystem tool chain*, not
> just the kernel mount code. And we most certainly don't control
> every 3rd party application that walks around in the filesystem's
> on-disk format, and so there are -zero- guarantees that the kernel
> filesystem mount code can give that an unclean shutdown occurred
> prior to the current mount.
> 
> And then for niche NFS server applications (like transparent
> fail-over between HA NFS servers) there are even more rigid
> constraints on NFS change attributes. And you're asking local
> filesystems to know about these application constraints and bake
> them into their on-disk format again.
> 
> This whole discussion has come about because we baked certain
> behaviour for NFS into the on-disk format many, many years ago, and
> it's only now that it is considered inadequate for *new* NFS
> application related functionality (e.g. fscache integration and
> cache validity across server side mount cycles).
> 
> We've learnt a valuable lesson from this: don't bake application
> specific persistent metadata requirements into the on-disk format
> because when the application needs to change, it requires every
> filesystem that supports that application-level functionality
> to change their on-disk formats...
> 
> > I'm not sure what good we'll get out of bolting this
> > scheme onto the NFS server, when the filesystem could just as easily
> > give us this info.
> 
> The xattr scheme guarantees the correct application behaviour that the NFS
> server requires, all at the NFS application level without requiring
> local filesystems to support the NFS requirements in their on-disk
> format. The NFS server controls the format and versioning of its
> on-disk persistent metadata (i.e. the xattrs it uses) and so any
> changes to the application level requirements of that functionality
> are now completely under the control of the application.
> 
> i.e. the application gets to manage version control, backwards and
> forwards compatibility of its persistent metadata, etc. What you
> are asking is that every local filesystem takes responsibility for
> managing the long term persistent metadata that only NFS requires.
> It's more complex to do this at the filesystem level, and we have to
> replicate the same work for every filesystem that is going to
> support this on-disk functionality.
> 
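[ Purely illustrative: because the metadata lives in an nfsd-owned
xattr, the blob stored there can carry its own version field, so format
evolution stays entirely inside the NFS server. The layout and names
below are assumptions, not an existing format. ]

#include <stddef.h>
#include <stdint.h>

struct nfsd_crash_md {
	uint32_t version;     /* format version; bump on layout changes */
	uint32_t flags;       /* reserved for future extensions */
	uint64_t crash_epoch; /* incremented on each detected unclean start */
};

/* A reader that rejects unknown formats stays entirely in nfsd. */
static int parse_crash_md(const void *buf, size_t len, uint64_t *epoch)
{
	const struct nfsd_crash_md *md = buf;

	if (len < sizeof(*md) || md->version != 1)
		return -1;	/* unknown format: caller falls back safely */
	*epoch = md->crash_epoch;
	return 0;
}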
> Using xattrs means the functionality is implemented once, it's
> common across all local filesystems, and no exportable filesystem
> needs to know anything about it as it's all self-contained in the
> NFS server code. The code is smaller, easier to maintain, consistent
> across all systems, easy to test, etc.
> 
> It also can be implemented and rolled out *immediately* to all
> existing supported NFS server implementations, without having to
> wait months/years (or never!) for local filesystem on-disk format
> changes to roll out to production systems.
> 
> Asking individual filesystems to implement application specific
> persistent metadata is a *last resort* and should only be done if
> correctness or performance cannot be obtained in any other way.
> 
> So, yeah, the only sane direction to take here is to use xattrs to
> store this NFS application level information. It's less work for
> everyone, and in the long term it means when the NFS application
> requirements change again, we don't need to modify the on-disk
> format of multiple local filesystems.
> 
> > In any case, the main problem at this point is not so much in detecting
> > when there has been an unclean shutdown, but rather what to do when
> > there is one. We need to advance the presented change attributes
> > beyond the largest possible one that may have been handed out prior to
> > the crash. 
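[ To make that concrete, one hypothetical scheme is to reserve the high
bits of the 64-bit change attribute for the crash epoch, so any value
handed out after a crash dominates anything handed out before it. The
16/48 split is arbitrary, chosen only for illustration. ]

#include <stdint.h>

#define CRASH_EPOCH_BITS 16
#define VERSION_BITS     (64 - CRASH_EPOCH_BITS)
#define VERSION_MASK     ((UINT64_C(1) << VERSION_BITS) - 1)

/* Hypothetical helper: fold the crash epoch into the change attr. */
static uint64_t nfsd_change_attr(uint64_t crash_epoch, uint64_t i_version)
{
	/*
	 * i_version bumps lost in the crash are dominated by the new
	 * epoch in the high bits, so clients never see the same value
	 * twice for different file states. Assumes i_version never
	 * overflows VERSION_BITS.
	 */
	return (crash_epoch << VERSION_BITS) | (i_version & VERSION_MASK);
}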
> 
> Sure, but you're missing my point: by using xattrs for detection,
> you don't need to involve anything to do with local filesystems at
> all.
> 
> > How do we determine what that offset should be? Your last email
> > suggested that there really is no limit to the number of i_version bumps
> > that can happen in memory before one of them makes it to disk. What can
> > we do to address that?
> 
> <shrug>
> 
> I'm just pointing out problems I see when defining this as behaviour
> for on-disk format purposes. If we define it as part of the on-disk
> format, then we have to be concerned about how it may be used
> outside the scope of just the NFS server application. 
> 
> However, if NFS keeps this metadata and functionality entirely
> contained at the application level via xattrs, I really don't care
> what algorithm NFS developers decide to use for their crash
> sequencing. It's not my concern at this point, and that's precisely
> why NFS should be using xattrs for this NFS specific functionality.
> 

I get it: you'd rather not have to deal with what you see as an NFS
problem, but I don't see how what you're proposing solves anything. We
might be able to use that scheme to detect crashes, but that's only part
of the problem (and the relatively simple part to solve, really).

Maybe you can clarify it for me:

Suppose we go with what you're saying and store some information in
xattrs that allows us to detect crashes in some fashion. The server
crashes and comes back up and we detect that there was a crash earlier.

What does nfsd need to do now to ensure that it doesn't hand out a
duplicate change attribute? 

Until we can answer that question, detecting crashes doesn't matter.
-- 
Jeff Layton <jlayton@...nel.org>
