Date:   Sun, 15 Oct 2017 16:37:03 -0700
From:   Kilian Cavalotti <kilian.cavalotti.work@...il.com>
To:     "Theodore Ts'o" <tytso@....edu>
Cc:     Andreas Dilger <adilger@...ger.ca>, linux-ext4@...r.kernel.org
Subject: Re: Recover from a "deleted inode referenced" situation

Hi Ted,

I very much appreciate you taking the time to answer here. Some
comments inline below.

On Sun, Oct 15, 2017 at 5:48 AM, Theodore Ts'o <tytso@....edu> wrote:
> It wasn't from replaying a journal, corrupted or not.  Andreas was
> mistaken there; remounting the file system read/write would not have
> triggered a journal replay; if the journal needed replaying it would
> have been replayed on the read-only mount.
>
> There are two possibilities about what could have happened; one is
> that the file system was already badly corrupted, but your copy
> command hadn't started hitting the corrupted portion of the file
> system, and so it was coincidence that the r/w remount happened right
> before the errors started getting flagged.

That's indeed something I considered, but (and I have recorded session
logs to prove I didn't dream it up after the fact) right after the
initial r/o mount, I ran a "du -hs" on some deeper-level directories:
there were no errors and "du" returned reasonable values.

The timeline was:
1. mounted r/o
2. "du /mnt/path/to/deep/dir" returned decent value
3. started rsyncing data out
4. mounted r/w
5. errors started to appear in the rsync process
6. "ls /mnt/path" returned I/O error and "deleted inode referenced"

So I _think_ that the filesystem was mostly ok before the r/w remount,
because the first-level directory of my initial "du", which worked,
ended up disappearing.
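
(In hindsight, I could have recorded the metadata view before the r/w
remount, e.g. with debugfs, which opens the device read-only by
default; a sketch, with /dev/sdb1 and the path as placeholders:)

    # inspect an inode without mounting or writing anything
    $ debugfs -R "stat /path/to/deep/dir" /dev/sdb1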

> The second possibility is that the allocation bitmaps were
> corrupted, and shortly after you remounted read/write something
> started to write into your file system, and since part of the inode
> table area was marked as "available" the write into the file system ended
> up smashing the inode table.  (More modern kernels enable the
> block_validity option by default, which would have prevented this; but
> if you were using an older kernel, it would not have enabled this
> feature by default.)

Yeah, I think it may have been this: although I didn't explicitly
write to the filesystem, I suspect some system daemon may have...
It's a 3.10 kernel, by the way, likely with some back-ported patches,
but the vendor doesn't provide many details.
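
(For anyone finding this in the archives, this is roughly how I'd now
check for and enable block_validity; a sketch, with /dev/sdb1 and /mnt
as placeholders for the real device and mount point:)

    # see whether the kernel mounted the fs with block_validity
    $ grep /mnt /proc/mounts

    # enable it explicitly on kernels that don't default to it
    $ mount -o remount,block_validity /mnt

    # or store it as a default mount option in the superblock
    $ tune2fs -o block_validity /dev/sdb1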

> Since the problem started with the resize, I'm actually guessing the
> first is more likely.  Especially if you were using an older version
> of e2fsprogs/resize2fs,

1.42.6, most likely.
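
(For anyone double-checking their own installation, both tools report
their version; a sketch:)

    $ resize2fs 2>&1 | head -1   # prints its version before the usage text
    $ e2fsck -V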

> and if you were doing an off-line resize
> (i.e., the file system was unmounted at the time).

I think it started online, but I'm not even sure it actually
completed. I don't have enough logs from that part to be sure of what
happened. I believe resize2fs may actually have refused to operate,
because of pre-existing ext4 errors, but in the end, the filesystem
appears to have been resized anyway... So maybe the online attempt
didn't work, and the vendor's automated process then tried an offline
resize? Is there any possibility that the filesystem could appear to
be resized (extended) while the inode tables still reflect the
pre-resize layout?
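
(My understanding of what an offline resize would have looked like, as
a sketch; /dev/sdb1 is a placeholder:)

    # offline (unmounted) resize: resize2fs insists on a clean fsck first
    $ umount /mnt
    $ e2fsck -f /dev/sdb1
    $ resize2fs /dev/sdb1    # grows to fill the device when no size is given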

> There were a
> number of bugs with older versions of e2fsprogs with file systems
> larger than 16TB (hence, the 64-bit file system feature was enabled)
> associated with off-line resize, and the manifestation of these bugs
> includes portions of the inode table getting smashed.
>
> Unfortunately, there may not be a lot we can do, if that's the case.  :-(

The upside is that I'm now learning a lot about file carving tools. :)

> This is probably not a great time to remind people about the value of
> backups, especially off-site backups (even if software was 100%
> bug-free, what if there was a fire at your home/work)?

It's always a good time to remind people about backups, and although
the bulk of my most precious data was replicated elsewhere, verifying
the integrity and consistency of such replicas is a whole venture in
itself.
So yeah, from now on, logical replication on different (file)systems,
metadata backups (I had no idea about e2image before) and snapshots
will be non-negotiable requirements.
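
(For the archives, the basic e2image usage I've now added to my
routine; a sketch, with /dev/sdb1 and the destination path as
placeholders, keeping in mind that the image holds metadata only, not
file contents:)

    # save the filesystem metadata (superblock, bitmaps, inode tables...)
    $ e2image /dev/sdb1 /safe/place/sdb1.e2i

    # in a disaster, the saved metadata can be written back with -I
    # (destructive; last-resort only, on an already-damaged filesystem)
    $ e2image -I /dev/sdb1 /safe/place/sdb1.e2i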

> Sorry,

I appreciate this, truly, and I'm very grateful for the very existence
of ext4. As with many catastrophes, this was likely an accumulation of
little things that would each have been benign taken independently,
but together contributed to my data going poof. :)

Cheers,
-- 
Kilian
