Message-ID: <20090512121305.GL21518@mit.edu>
Date:	Tue, 12 May 2009 08:13:06 -0400
From:	Theodore Tso <tytso@....edu>
To:	Alexander Shishkin <alexander.shishckin@...il.com>
Cc:	Eric Sandeen <sandeen@...hat.com>,
	Andreas Dilger <adilger@....com>, linux-ext4@...r.kernel.org
Subject: Re: [Q] ext3 mkfs: zeroing journal blocks

On Tue, May 12, 2009 at 02:55:10PM +0300, Alexander Shishkin wrote:
> On 11 May 2009 21:44, Eric Sandeen <sandeen@...hat.com> wrote:
> > Andreas Dilger wrote:
> >
> >> The reason that the journal is zeroed is because there is some chance
> >> that old (valid at the time) transaction headers and commit blocks might
> >> be in the journal and could accidentally be "recovered" and cause bad
> >> corruption of the filesystem.
> >
> > But I guess the question is, why isn't a normal internal log zeroed?
> >
> > If I'm reading it right, only external logs get this treatment, and I
> > think that's what generated the original question from Alexander.
> 
> My concern was basically whether it is safe to skip zeroing for the internal journal.

Strictly speaking, no.  Most of the time you'll get lucky.  The place
where you will get into trouble is if there is leftover
uninitialized garbage (particularly if you are reformatting an
existing ext3/4 filesystem) that looks like a journal log block, with
the correct journal transaction number, *and* the system crashes
before the journal has been completely written through at least once.
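
Just to make the failure mode concrete, here's a rough (completely
untested) sketch of the gate the jbd recovery pass applies to each
candidate block.  The struct is the standard 12-byte jbd block header;
"expected_seq" stands in for the next commit ID recovery is hunting
for, which it ultimately gets from the journal superblock:

#include <stdint.h>
#include <arpa/inet.h>			/* ntohl() */

#define JBD_MAGIC_NUMBER	0xc03b3998U

struct journal_header {
	uint32_t	h_magic;	/* all fields big-endian on disk */
	uint32_t	h_blocktype;	/* descriptor, commit, revoke, ... */
	uint32_t	h_sequence;	/* transaction ID */
};

/*
 * Would recovery take this block seriously?  Nothing else protects
 * us: leftover garbage that happens to match both the magic and the
 * expected sequence gets treated as a real descriptor/commit block.
 */
static int looks_like_journal_block(const struct journal_header *hdr,
				    uint32_t expected_seq)
{
	if (ntohl(hdr->h_magic) != JBD_MAGIC_NUMBER)
		return 0;
	return ntohl(hdr->h_sequence) == expected_seq;
}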

What precisely is your concern?  Normally the journal isn't that big,
and it's a contiguous write --- so it doesn't take that long.  Are you
worried about the time it takes, or trying to avoid some writes to an
SSD, or some other concern?  If we know it's an SSD, where reads are
fast and writes are slow, I suppose we could scan the disk looking
for potentially dangerous blocks and zero them manually.  It's really
not clear it's worth the effort though.
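
If someone did want to try it, it would look roughly like the
standalone sketch below (again, completely untested; the device path
and journal geometry are just command-line placeholders, since mke2fs
would already know them from the allocation it just made).  Zeroing
anything that merely carries the jbd magic is a conservative superset
of what recovery would actually replay, which keeps the logic trivial:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>			/* ntohl() */

#define JBD_MAGIC_NUMBER	0xc03b3998U

int main(int argc, char **argv)
{
	if (argc != 5) {
		fprintf(stderr, "usage: %s <device> <journal-start-block> "
			"<journal-blocks> <blocksize>\n", argv[0]);
		return 1;
	}

	const char *dev = argv[1];
	unsigned long long start = strtoull(argv[2], NULL, 0);
	unsigned long long count = strtoull(argv[3], NULL, 0);
	unsigned long blocksize = strtoul(argv[4], NULL, 0);

	int fd = open(dev, O_RDWR);
	if (fd < 0) {
		perror(dev);
		return 1;
	}

	unsigned char *buf = malloc(blocksize);
	unsigned char *zeroes = calloc(1, blocksize);
	if (!buf || !zeroes) {
		perror("malloc");
		return 1;
	}

	unsigned long long zeroed = 0;
	for (unsigned long long i = 0; i < count; i++) {
		off_t off = (off_t)((start + i) * blocksize);

		if (pread(fd, buf, blocksize, off) != (ssize_t)blocksize) {
			perror("pread");
			return 1;
		}

		/*
		 * The first word of any jbd block is the magic number
		 * (stored big-endian); anything else can be left alone.
		 */
		uint32_t magic;
		memcpy(&magic, buf, sizeof(magic));
		if (ntohl(magic) != JBD_MAGIC_NUMBER)
			continue;

		/* Looks like a stale journal block: zero just this one. */
		if (pwrite(fd, zeroes, blocksize, off) != (ssize_t)blocksize) {
			perror("pwrite");
			return 1;
		}
		zeroed++;
	}

	printf("zeroed %llu of %llu journal blocks\n", zeroed, count);
	free(buf);
	free(zeroes);
	close(fd);
	return 0;
}

On rotating media this is a clear loss (a full read pass plus
scattered writes instead of one sequential write), so it only starts
to make sense if we know writes are the expensive operation.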

						- Ted
