Message-ID: <20131208032103.GC13776@thunk.org>
Date:	Sat, 7 Dec 2013 22:21:03 -0500
From:	Theodore Ts'o <tytso@....edu>
To:	David Turner <novalis@...alis.org>
Cc:	Andreas Dilger <adilger@...ger.ca>, Mark Harris <mhlk@....us>,
	Jan Kara <jack@...e.cz>,
	Ext4 Developers List <linux-ext4@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v7 1/2] e2fsck: Correct ext4 dates generated by old
 kernels.

On Sat, Dec 07, 2013 at 09:58:17PM -0500, David Turner wrote:
> > We can and should change time_to_string to take an unsigned 64-bit
> > type; it's an internal interface to debugfs.  
> 
> Shouldn't this be a signed 64-bit type, since we have to support times
> before 1970?

That depends on whether we make time_to_string() and string_to_time()
handle the ext4-specific encoding, or whether we make these routines
take an offset in terms of "seconds since the Epoch".  I was assuming
the former, but it does make sense to have ext2fs library functions
which handle the conversion.  In that case, though, said library
functions will need to use an explicit 64-bit libext2fs type, since we
can't count on the width of the system-supplied time_t.
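
Roughly speaking (hypothetical names and signatures, not the actual
debugfs/libext2fs interfaces), the two shapes would be something like
the following, using a signed 64-bit type per the point above about
pre-1970 dates:

#include <stdint.h>

/* An explicit signed 64-bit seconds-since-the-Epoch type, so pre-1970
 * and post-2038 values work regardless of the width of the system
 * time_t. */
typedef int64_t ext2fs_time64_t;

/* Shape 1: the helper understands the raw ext4 on-disk pair
 * (seconds plus the *_extra field). */
extern const char *inode_time_to_string(uint32_t seconds,
					uint32_t extra);

/* Shape 2: the helper takes a plain 64-bit epoch offset, and the
 * caller decodes the on-disk fields first. */
extern const char *time_to_string(ext2fs_time64_t t);
extern ext2fs_time64_t string_to_time(const char *arg);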

> > What I'd probably suggest is to have a mode (set either by an
> > environment variable or a debugfs command) which causes
> > time_to_string to use an internal version of a 64-bit capable gmtime
> > function.  This will allow for better regression testing, and do so
> > in a way that will have stable results regardless of whether a
> > particular platform, or a future version of glibc, has a 64-bit
> > time_t or not.
> 
> Presently, the TZ environment variable selects between gmtime and
> localtime.  How about the following:
> 
> We supply a 64-bit version of gmtime (but not localtime).  If time_t is
> 32 bits, and the date being printed is after 2038, and TZ is not GMT,
> then we return an error message instead of calling localtime.  Then, in
> any of the tests that depend on debugfs date printing we can simply set
> TZ=GMT, so that they will continue to work regardless of the size of
> time_t.
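
(For reference, a rough sketch of what a self-contained 64-bit gmtime
amounts to -- the standard days-to-civil-date conversion under the
usual POSIX no-leap-seconds model.  Illustrative only, not the code
in this patch series:)

#include <stdint.h>

struct civil_time {
	int64_t year;		/* full year, e.g. 2100 */
	int mon, mday;		/* 1..12, 1..31 */
	int hour, min, sec;
};

static void gmtime64(int64_t t, struct civil_time *out)
{
	int64_t days = t / 86400;
	int64_t secs = t % 86400;

	if (secs < 0) {		/* floor division for pre-1970 times */
		secs += 86400;
		days--;
	}
	out->hour = (int)(secs / 3600);
	out->min  = (int)((secs % 3600) / 60);
	out->sec  = (int)(secs % 60);

	/* Days-to-civil-date conversion using 400-year eras. */
	days += 719468;		/* shift epoch to 0000-03-01 */
	{
		int64_t era = (days >= 0 ? days : days - 146096) / 146097;
		int64_t doe = days - era * 146097;
		int64_t yoe = (doe - doe/1460 + doe/36524
				- doe/146096) / 365;
		int64_t doy = doe - (365*yoe + yoe/4 - yoe/100);
		int64_t mp  = (5*doy + 2) / 153;

		out->mday = (int)(doy - (153*mp + 2)/5 + 1);
		out->mon  = (int)(mp + (mp < 10 ? 3 : -9));
		out->year = yoe + era*400 + (out->mon <= 2);
	}
}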

Well, the reason I wanted a way to explicitly specify the built-in
gmtime was to isolate problems caused by differences in how future
gmtime() implementations might behave.  Maybe I'm being too paranoid
here, and all of the confusion over leap seconds should be resolved by
now.
(e.g., https://groups.google.com/forum/#!topic/comp.unix.programmer/zTXAaa5lnFg)

But we could do this via a special TZ value (__libext2fs_GMT).  This
should be safe: other programs using the stock localtime() function
will be fine, since libc interprets unrecognized TZ values as GMT.
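
In debugfs the check could be as simple as something like this
(sketch only; the names here, other than the TZ value itself, are
made up):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

enum time_fmt {
	USE_INTERNAL_GMTIME64,	/* the library's own 64-bit gmtime */
	USE_SYSTEM_LOCALTIME,
	UNREPRESENTABLE		/* 32-bit time_t can't hold the value */
};

static enum time_fmt pick_time_formatter(int64_t t)
{
	const char *tz = getenv("TZ");

	/* Explicit request for stable, libc-independent output. */
	if (tz && !strcmp(tz, "__libext2fs_GMT"))
		return USE_INTERNAL_GMTIME64;

	/* On a 32-bit time_t system, values outside 1901..2038 can't
	 * be handed to localtime() at all. */
	if (sizeof(time_t) == 4 && (t > INT32_MAX || t < INT32_MIN))
		return UNREPRESENTABLE;

	return USE_SYSTEM_LOCALTIME;
}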

> > Given that we always set ctime when we delete a file (which makes
> > sense, since we are decrementing i_links_count), I think changing
> > debugfs (which is the only thing that even looks at dtime, BTW)
> > makes a lot of sense.
> 
> I'm not really very familiar with the ext4 internals.  Are you saying
> that it's safe to just read ctime_extra (without explicitly writing it
> when we write dtime)?  Or just that it is safe to overwrite it when we
> write dtime?

We actually set dtime in ext4_evict_inode(); this gets called when
i_nlink goes to zero, and there are no longer any open file
descriptors referencing the inode.  It is possible for ctime != dtime
if there is still a fd open on the inode at the time of the last
unlink(), and the fd could keep the inode from being actually released
for hours, and in theory, years or decades.

In practice, though, the high bits of ctime and dtime should be the
same, unless the above period happens to span one of the encoding
gaps.  The simplest thing to do would be to unconditionally set both
ctime and dtime in ext4_evict_inode().  The slightly more complicated
approach would be to check whether the high bits of dtime (if we had a
dtime_extra) differ from ctime_extra, and only in that case set ctime
along with dtime.
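
The epoch-bit check itself is trivial, assuming the encoding where the
low two bits of the *_extra field extend a sign-extended 32-bit
seconds count -- a userspace-style sketch, not actual
ext4_evict_inode() code:

#include <stdbool.h>
#include <stdint.h>

#define EXT4_EPOCH_MASK 0x3	/* low 2 bits of the *_extra field */

static uint32_t epoch_bits(int64_t t)
{
	/* Epochs tile the time line in 2^32-second windows starting
	 * at -2^31, so the signed 32-bit range is epoch 0. */
	return (uint32_t)(((uint64_t)(t + 0x80000000LL)) >> 32)
		& EXT4_EPOCH_MASK;
}

static bool ctime_needs_refresh(uint32_t ctime_extra, int64_t dtime)
{
	/* If dtime's epoch bits would differ from ctime's, the two
	 * timestamps straddle one of the encoding gaps, so ctime
	 * should be updated along with dtime. */
	return (ctime_extra & EXT4_EPOCH_MASK) != epoch_bits(dtime);
}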

It doesn't really matter since by the time the inode is unlinked,
nothing else will see the contents of the inode except for a file
system developer who is poking at the file system using debugfs.  It
might be useful in some circumstances to see when the file was
unlinked versus when it was finally released, but that's a
pretty minor edge case.

Regards,

						- Ted