Date:   Tue, 06 Sep 2022 13:04:05 -0400
From:   Jeff Layton <jlayton@...nel.org>
To:     Florian Weimer <fweimer@...hat.com>
Cc:     tytso@....edu, adilger.kernel@...ger.ca, djwong@...nel.org,
        david@...morbit.com, trondmy@...merspace.com, neilb@...e.de,
        viro@...iv.linux.org.uk, zohar@...ux.ibm.com, xiubli@...hat.com,
        chuck.lever@...cle.com, lczerner@...hat.com, jack@...e.cz,
        bfields@...ldses.org, brauner@...nel.org,
        linux-man@...r.kernel.org, linux-api@...r.kernel.org,
        linux-btrfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, ceph-devel@...r.kernel.org,
        linux-ext4@...r.kernel.org, linux-nfs@...r.kernel.org,
        linux-xfs@...r.kernel.org
Subject: Re: [RFC PATCH v2] statx, inode: document the new STATX_INO_VERSION
 field

On Tue, 2022-09-06 at 12:41 -0400, Jeff Layton wrote:
> On Tue, 2022-09-06 at 14:17 +0200, Florian Weimer wrote:
> > * Jeff Layton:
> > 
> > > All of the existing implementations use all 64 bits. If you were to
> > > increment a 64-bit value every nanosecond, it would take >500 years for
> > > it to wrap. I'm hoping that's good enough. ;)
> > > 
> > > The implementation that all of the local Linux filesystems share uses
> > > one bit to track whether the value has been queried, so there you only
> > > get 63 bits of counter.
> > > 
> > > My original thinking here was that we should leave the spec "loose" to
> > > allow for implementations that may not be based on a counter. E.g. could
> > > some filesystem do this instead by hashing certain metadata?
> > 
> > Hashing might have collisions that could be triggered deliberately, so
> > probably not a good idea.  It's also hard to argue that random
> > collisions are unlikely.
> > 
> 
> In principle, if a filesystem could guarantee enough timestamp
> resolution, collisions could be made hard to achieve. You could also
> factor in other metadata that isn't necessarily visible to userland to
> try to ensure uniqueness in the counter.
> 
> Still...
> 

Actually, Bruce brought up a good point on IRC. The main danger here is
that we might do this:

Start (i_version is at 1)
write data (i_version goes to 2)
statx+read data (observer associates data with i_version of 2)
Crash before the new i_version makes it to disk
Machine comes back up (i_version back at 1)
write data (i_version goes to 2 again)
statx (observer wrongly assumes their cached data is still valid)
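
To make the failure concrete, here's a minimal userland sketch of the
observer's revalidation check, assuming the STATX_INO_VERSION mask and
stx_ino_version field proposed in this series; cache_still_valid() and
cached_version are purely illustrative names:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <linux/types.h>

/* Hypothetical observer-side check built on the proposed field. */
static int cache_still_valid(const char *path, __u64 cached_version)
{
        struct statx stx;

        if (statx(AT_FDCWD, path, 0, STATX_INO_VERSION, &stx) != 0)
                return 0;
        /* After the crash above, i_version repeats "2", so this
         * wrongly reports the stale cache as valid. */
        return stx.stx_ino_version == cached_version;
}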

We can mitigate this by factoring in the ctime when we do the statx.
Another option, though, would be to factor the ctime in when we generate
the new value and store it.

Here's what nfsd does today:

      /* Fold the ctime into the change attribute: seconds in the
       * high bits, nanoseconds plus the i_version counter below. */
      chattr =  stat->ctime.tv_sec;
      chattr <<= 30;
      chattr += stat->ctime.tv_nsec;
      chattr += inode_query_iversion(inode);

Instead of doing this after we query it, we could do that before storing
it. After a crash, we might see the value go backward, but if a new
write later happens, the new value would be very unlikely to match the
one that got lost.
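
For illustration, here's a rough sketch of that store-side variant,
built on the existing helpers in include/linux/iversion.h; the function
name and the exact mixing are hypothetical, not something from the
posted patches:

/* Hypothetical store-side variant: fold the ctime in when generating
 * the value to be stored rather than at query time, so a counter that
 * repeats after a crash still yields a value that no pre-crash
 * observer ever saw (the ctime will have moved on by then). */
static void inode_set_iversion_salted(struct inode *inode)
{
        u64 next;

        next  = (u64)inode->i_ctime.tv_sec << 30;
        next += inode->i_ctime.tv_nsec;
        next += inode_query_iversion(inode);
        inode_set_iversion_raw(inode, next);
}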

That seems quite doable, and might be better for userland consumers
overall.

> > > It's arguable, though, that the NFSv4 spec requires that this be based
> > > on a counter, as the client is required to increment it in the case of
> > > write delegations.
> > 
> > Yeah, I think it has to be monotonic.
> > 
> 
> I think so too. NFSv4 sort of needs that anyway.
> 
> > > > If the system crashes without flushing disks, is it possible to observe
> > > > new file contents without a change of i_version?
> > > 
> > > Yes, I think that's possible given the current implementations.
> > > 
> > > We don't have a great scheme to combat that at the moment, other than
> > > looking at this in conjunction with the ctime. As long as the clock
> > > doesn't jump backward after the crash and it takes more than one jiffy
> > > to get the host back up, you can be reasonably sure that
> > > i_version+ctime should never repeat.
> > > 
> > > Maybe that's worth adding to the NOTES section of the manpage?
> > 
> > I'd appreciate that.
> 
> Ok! New version of the manpage patch sent. If no one has strong
> objections to the proposed docs, I'll send out new kernel patches in the
> next day or two.
> 
> Thanks!

-- 
Jeff Layton <jlayton@...nel.org>
