Message-ID: <d9c065939af2728b1c0768d5ef7526995b634902.camel@kernel.org>
Date:   Fri, 16 Sep 2022 07:32:29 -0400
From:   Jeff Layton <jlayton@...nel.org>
To:     NeilBrown <neilb@...e.de>
Cc:     "J. Bruce Fields" <bfields@...ldses.org>,
        Theodore Ts'o <tytso@....edu>, Jan Kara <jack@...e.cz>,
        adilger.kernel@...ger.ca, djwong@...nel.org, david@...morbit.com,
        trondmy@...merspace.com, viro@...iv.linux.org.uk,
        zohar@...ux.ibm.com, xiubli@...hat.com, chuck.lever@...cle.com,
        lczerner@...hat.com, brauner@...nel.org, fweimer@...hat.com,
        linux-man@...r.kernel.org, linux-api@...r.kernel.org,
        linux-btrfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, ceph-devel@...r.kernel.org,
        linux-ext4@...r.kernel.org, linux-nfs@...r.kernel.org,
        linux-xfs@...r.kernel.org
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new
 STATX_INO_VERSION field

On Fri, 2022-09-16 at 08:42 +1000, NeilBrown wrote:
> On Fri, 16 Sep 2022, Jeff Layton wrote:
> > On Thu, 2022-09-15 at 10:06 -0400, J. Bruce Fields wrote:
> > > On Tue, Sep 13, 2022 at 09:14:32AM +1000, NeilBrown wrote:
> > > > On Mon, 12 Sep 2022, J. Bruce Fields wrote:
> > > > > On Sun, Sep 11, 2022 at 08:13:11AM +1000, NeilBrown wrote:
> > > > > > On Fri, 09 Sep 2022, Jeff Layton wrote:
> > > > > > > 
> > > > > > > The machine crashes and comes back up, and we get a query for i_version
> > > > > > > and it comes back as X. Fine, it's an old version. Now there is a write.
> > > > > > > What do we do to ensure that the new value doesn't collide with X+1? 
> > > > > > 
> > > > > > (I missed this bit in my earlier reply..)
> > > > > > 
> > > > > > How is it "Fine" to see an old version?
> > > > > > The file could have changed without the version changing.
> > > > > > And I thought one of the goals of the crash-count was to be able to
> > > > > > provide a monotonic change id.
> > > > > 
> > > > > I was still mainly thinking about how to provide reliable close-to-open
> > > > > semantics between NFS clients.  In the case the writer was an NFS
> > > > > client, it wasn't done writing (or it would have COMMITted), so those
> > > > > writes will come in and bump the change attribute soon, and as long as
> > > > > we avoid the small chance of reusing an old change attribute, we're OK,
> > > > > and I think it'd even still be OK to advertise
> > > > > CHANGE_TYPE_IS_MONOTONIC_INCR.
> > > > 
> > > > You seem to be assuming that the client doesn't crash at the same time
> > > > as the server (maybe they are both VMs on a host that lost power...)
> > > > 
> > > > If client A reads and caches, client B writes, the server crashes after
> > > > writing some data (to already allocated space so no inode update needed)
> > > > but before writing the new i_version, then client B crashes.
> > > > When server comes back the i_version will be unchanged but the data has
> > > > changed.  Client A will cache old data indefinitely...
> > > 
> > > I guess I assume that if all we're promising is close-to-open, then a
> > > client isn't allowed to trust its cache in that situation.  Maybe that's
> > > an overly draconian interpretation of close-to-open.
> > > 
> > > Also, I'm trying to think about how to improve things incrementally.
> > > Incorporating something like a crash count into the on-disk i_version
> > > fixes some cases without introducing any new ones or regressing
> > > performance after a crash.
> > > 
> > 
> > I think we ought to start there.
> > 
> > > If we subsequently wanted to close those remaining holes, I think we'd
> > > need the change attribute increment to be seen as atomic with respect to
> > > its associated change, both to clients and (separately) on disk.  (That
> > > would still allow the change attribute to go backwards after a crash, to
> > > the value it held as of the on-disk state of the file.  I think clients
> > > should be able to deal with that case.)
> > > 
> > > But, I don't know, maybe a bigger hammer would be OK:
> > > 
> > > > I think we need to require the filesystem to ensure that the i_version
> > > > is seen to increase shortly after any change becomes visible in the
> > > > file, and no later than the moment when the request that initiated the
> > > > change is acknowledged as being complete.  In the case of an unclean
> > > > restart, any file that is not known to have been unchanged immediately
> > > > before the crash must have i_version increased.
> > > > 
> > > > The simplest implementation is to have an unclean-restart counter and to
> > > > always include this, multiplied by some constant X, in the reported
> > > > i_version.  The filesystem guarantees to record the i_version (e.g. at
> > > > least to the journal) whenever it comes close to being X more than the
> > > > previously recorded value.  The filesystem gets to choose X.
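As a concrete illustration of the scheme above (a sketch with invented
names and an arbitrary X, not anyone's actual implementation):

/*
 * Reported value = crash_count * X + counter.  The filesystem journals
 * the counter before it can drift X or more past the last recorded
 * value, so after an unclean restart (crash_count is bumped, counter
 * resumes from the journalled value) every newly reported value still
 * exceeds anything reported before the crash.
 */
#define IVER_X	(1ULL << 20)		/* fs-chosen constant X */

struct iver_state {
	u64 counter;			/* in-memory change counter */
	u64 last_recorded;		/* last journalled counter value */
	u64 crash_count;		/* unclean-restart counter */
};

static u64 iver_report(const struct iver_state *s)
{
	return s->crash_count * IVER_X + s->counter;
}

static void iver_bump(struct iver_state *s)
{
	s->counter++;
	if (s->counter - s->last_recorded >= IVER_X - 1) {
		/* journal_record(s->counter);	-- hypothetical hook */
		s->last_recorded = s->counter;
	}
}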
> > > 
> > > So the question is whether people can live with invalidating all client
> > > caches after a crash.  I don't know.
> > > 
> > 
> > Yeah, that is pretty nasty. We don't get perfect crash resilience by
> > incorporating this into the on-disk value, but I like that better than
> > factoring it in at presentation time.
> > 
> > That would mean that the servers would end up getting hammered with read
> > activity after a crash (at least in some environments). I don't think
> > that would be worth the tradeoff. There's a real benefit to preserving
> > caches when we can.
> 
> Would it really mean the server gets hammered?
> 

Traditionally, yes. That was the rationale for fscache, after all.
Particularly in large renderfarms, when a large swath of client
machines is rebooted, they come back up with blank caches and hammer
the server with READs.

We'll be back to that behavior after a crash with this scheme, since
fscache uses the change attribute to determine cache validity. I guess
that's unavoidable for now.
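
To make that concrete, a rough sketch (illustration only, not the real
fscache interface):

/*
 * A cached object is reusable iff the change attribute recorded when
 * it was cached still matches what the server now reports.  Mixing a
 * new crash counter into i_version changes the reported value for
 * every file, so every cached object fails this check after a server
 * crash.
 */
static bool cache_object_valid(u64 cached_change_attr,
			       u64 server_change_attr)
{
	return cached_change_attr == server_change_attr;
}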

> For files and NFSv4, any significant cache should be held on the basis
> of a delegation, and if the client holds a delegation then it shouldn't
> be paying attention to i_version.
> 
> I'm not entirely sure of this.  Section 10.2.1 of RFC 5661 seems to
> suggest that when the client uses CLAIM_DELEG_PREV to reclaim a
> delegation, it must then return the delegation.  However the explanation
> seems to be mostly about WRITE delegations and immediately flushing
> cached changes.  Do we know if there is a way for the server to say "OK,
> you have that delegation again" in a way that the client can keep the
> delegation and continue to ignore i_version?
> 

Delegations may change that calculus. In general I've noticed that the
client tends to ignore attribute cache changes when it has a delegation.

> For directories, which cannot be delegated the same way but can still be
> cached, the issues are different.  All directory morphing operations
> will be journalled by the filesystem so it should be able to keep the
> i_version up to date.  So the (journalling) filesystem should *NOT* add
> a crash-count to the i_version for directories even if it does for files.
> 

Interesting and good point. We should be able to make that distinction
and just mix in the crash counter for regular files.
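Something like this, as a sketch of that gate (invented helper name,
not an existing kernel function):

/*
 * Only regular files get the crash counter folded in; directory
 * i_version updates are journalled along with the namespace
 * operations that cause them, so directories keep the stored value.
 */
static u64 mix_crash_counter(const struct inode *inode, u64 val,
			     u16 crash_counter)
{
	if (!S_ISREG(inode->i_mode))
		return val;
	return val | ((u64)crash_counter << 48);
}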

> 
> 
> > 
> > > > A more complex solution would be to record (similar to the way orphans
> > > > are recorded) any file which is open for write, and to add X to the
> > > > i_version for any "dirty" file still recorded during an unclean
> > > > restart.  This would avoid bumping the i_version for read-only files.
> > > 
> > > Is that practical?  Working out the performance tradeoffs sounds like a
> > > project.
> > > 
> > > 
> > > > There may be other solutions, but we should leave that up to the
> > > > filesystem.  Each filesystem might choose something different.
> > > 
> > > Sure.
> > > 
> > 
> > Agreed here too. I think we need to allow filesystems some flexibility here.
> > 
> > Here's what I'm thinking:
> > 
> > We'll carve out the upper 16 bits in the i_version counter to be the
> > crash counter field. That gives us 64k crashes before we have to worry
> > about collisions. Hopefully the remaining 47 bits of counter will be
> > plenty, given that we only increment it when the value has been queried
> > and something has changed. (Can we mitigate wrapping here somehow?)
> > 
> > The easiest way to do this would be to add a u16 s_crash_counter to
> > struct super_block. We'd initialize that to 0, and the filesystem could
> > fill that value out at mount time.
> > 
> > Then inode_maybe_inc_iversion can just shift s_crash_counter left by 48
> > bits and plop it into the top of the value we're preparing to cmpxchg
> > into place.
> > 
> > This is backward compatible too, at least for i_version counter values
> > that are <2^47. With anything larger, we might end up with something
> > going backward and a possible collision, but it's (hopefully) a small
> > risk.
> > 
> > -- 
> > Jeff Layton <jlayton@...nel.org>
> > 
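For illustration, the layout in the quoted proposal would look like
this (invented names; the 47-bit figure comes from the low bit of
i_version being reserved as the existing "queried" flag):

/*
 * Bit 0 is the "queried" flag, bits 1-47 the change counter, and
 * bits 48-63 the crash counter:
 *
 *  63             48 47                          1 0
 * +-----------------+----------------------------+-+
 * |  crash counter  |       change counter       |Q|
 * +-----------------+----------------------------+-+
 */
#define IVER_QUERIED		(1ULL << 0)
#define IVER_CRASH_SHIFT	48

static u64 iver_encode(u16 crash_counter, u64 counter, bool queried)
{
	u64 v = (u64)crash_counter << IVER_CRASH_SHIFT;

	v |= (counter << 1) & ((1ULL << IVER_CRASH_SHIFT) - 1);
	if (queried)
		v |= IVER_QUERIED;
	return v;
}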

-- 
Jeff Layton <jlayton@...nel.org>
