[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Mon, 26 May 2008 12:24:28 -0600
From:	Andreas Dilger <>
To:	Theodore Tso <tytso@....EDU>
	Girish Shilamkar <>
Subject: Re: What to do when the journal checksum is incorrect

On May 25, 2008  07:38 -0400, Theodore Ts'o wrote:
> Well, what are the alternatives?  Remember, we could have potentially
> 50-100 megabytes of stale metadata that haven't been written to
> filesystem.  And unlike ext2, we've deliberately held back writing
> back metadata by pinning it so, things could be much worse.  So let's
> tick off the possibilities:
> * An individual data block is bad --- we write complete garbage into
>   the filesystem, which means in the worst case we lose 32 inodes
>   (unless that inode table block is repeated later in the journal), 1
>   directory block (causing files to land in lost+found), one bitmap
>   block (which e2fsck can regenerate), or a data block (if data=journalled).
> * A journal descriptor block is bad --- if it's just a bit-flip, we
>   could end up writing a data block in the wrong place, which would be
>   bad; if it's complete garbage, we will probably assume the journal
>   ended early, and leave the filesystem silently badly corrupted.
> * The journal commit block is bad --- probably we will just silently
>   assume the journal ended early, unless the bit-flip happened exactly
>   in the CRC field.
> The most common case is that one or more individual data blocks in the
> journal are bad, and the question is whether writing that garbage into
> the filesystem is better or worse than aborting the journal right then
> and there.

You are focusing on the case where 1 or 2 filesystem blocks in the
journal are bad, but I suspect the real-world cases are more likely to
involve 1 or 2MB of bad data, or more.  Considering that a disk sector
is at least 4kB or 64kB in size, and that problems like track misalignment
(overpowered seek), write failure (high-flying write), or device cache
reordering will result in a large number of bad blocks in the
journal, I don't think 1 or 2 bad filesystem blocks is a realistic
failure scenario.

> The problem with only replaying the "good" part of the journal is the
> kernel then truncates the journal, and it leaves e2fsck with no way of
> doing anything intelligent afterwards.  So another possibility is to
> not replay the journal at all, and fail the mount unless the
> filesystem is being mounted read-only; but the question is whether we
> are better off not replaying the journal at *all*, or just replaying
> part of it.

I'd think that at a minimum we should replay the journal up to the bad
transaction.  The current code, which replays the bad transaction as
well, is of course incorrect.  The probability that later transactions
have begun checkpointing their blocks to the filesystem decreases for
each transaction after the bad one, so the probability of those
changes corrupting the filesystem is correspondingly lower.

> Consider that if /boot/grub/menu.lst got written, and one of its data
> blocks was previously a directory block that had since been deleted,
> but was in the journal and had been revoked, replaying part of the
> journal might make the system non-bootable.

Sure, such scenarios exist, but given the architecture of ext3/4 the
data block will _likely_ have been rewritten in the same place.  The
more likely case is that some important filesystem metadata (itable,
indirect blocks of files, etc.) is being overwritten, and corruption in
the journal is a laser-guided missile for finding all of the important
blocks in the filesystem to spread that corruption to.

> So the other alternative I seriously considered was not replaying the
> journal at all, and bailing out after seeing the bad checksum --- but
> that just defers the problem to e2fsck, and e2fsck can't really do
> anything much different, and the tools to allow a human to make a
> decision on a block by block basis in the journal don't exist, and
> even if they did would make more system administrators run screaming.
> I suspect the *best* approach is to change the journal format one more
> time, and include a CRC on a per-block basis in the descriptor blocks,
> and a CRC for the entire descriptor block.  That way, we can decide
> what to replay or not on a per-block basis.

Yes, I was thinking exactly the same thing.  This would give the maximum
probability of a correct outcome, because only "correct" blocks are
checkpointed into the filesystem, and at least an old version of the
block is present in the filesystem (unless it is a new block).  There is
also a chance that a later transaction will overwrite the bad block,
which would avoid even the need to invoke e2fsck.

This would need:
- a checksum in the per-block transaction record (tag).  One option is
  to keep an 8- or 16-bit checksum in the "flags" field, to keep it
  compatible with older JBD implementations.
- a checksum of the commit header and tags to ensure we can trust the
  per-block checksums, and we don't need a huge checksum for each block.

Cheers, Andreas
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
