Date:	Tue, 24 Apr 2012 21:27:07 -0400
From:	Ted Ts'o <>
To:	Jan Kara <>
Cc:	Andreas Dilger <>,
	Zheng Liu <>,
	Andreas Dilger <>,
	"" <>,
	"" <>
Subject: Re: [RFC] jbd2: reduce the number of writes when committing a

On Tue, Apr 24, 2012 at 11:57:09PM +0200, Jan Kara wrote:
>   Also, the async commit code currently has essentially unfixable bugs
> in its handling of cache flushes, as I wrote earlier. Because data
> blocks are not part of the journal checksum, it can happen with the
> async commit code that data is not safely on disk even though the
> transaction is completely committed. So the async commit code isn't
> really safe to use unless you are fine with exposure of uninitialized
> data...

With the old journal checksum, the data blocks *are* part of the
journal checksum.  That's not the reason I haven't enabled it as a
default (even though it would come close to doubling fs_mark benchmark
numbers).  The main issue is that e2fsck doesn't deal intelligently with
the case where some commit *other* than the last one has a bad checksum.
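To illustrate why that's painful: with a single checksum per commit, a
mismatch in any one commit means replay can't trust that commit or
anything after it, even if the later commits' own checksums are fine.
Here's a rough sketch of that truncation logic (hypothetical code, not
the actual e2fsck/jbd2 source; the function name and arrays are made up
for illustration):

```c
/* Hypothetical sketch of commit-level checksum handling during journal
 * replay.  computed[i] is the checksum recomputed over commit i's
 * blocks; stored[i] is the checksum recorded in commit i's commit
 * block.  With only a per-commit checksum, the first mismatch forces
 * replay to stop: that commit and every later one must be discarded. */
int last_good_commit(const unsigned int *computed,
                     const unsigned int *stored, int n)
{
    int i;

    for (i = 0; i < n; i++)
        if (computed[i] != stored[i])
            break;  /* can't trust this commit or anything after it */

    return i;  /* number of leading commits safe to replay */
}
```

So a bad checksum in the middle of the journal throws away perfectly
good later commits, which is the case e2fsck has no smart answer for.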

With the new journal checksum patches, each individual data block has
its own checksum, so we don't need to discard the entire commit;
instead we can just drop the individual block(s) that have a bad
checksum, and then force a full fsck run afterwards.
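In other words, the replay decision moves from per-commit to per-block.
A minimal sketch of that check (again hypothetical code, not the real
jbd2 patches; plain CRC-32 stands in for whatever checksum the journal
actually uses, and the tag structure is invented for illustration):

```c
/* Hypothetical sketch: with a per-block checksum stored alongside each
 * block's descriptor tag, replay can verify blocks individually and
 * skip only the bad ones, then force a full fsck afterwards. */
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (IEEE polynomial, reflected) -- a stand-in for the
 * journal's real checksum algorithm, used here only for illustration. */
static uint32_t crc32_buf(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Invented per-block tag carrying the checksum recorded at commit time. */
struct block_tag {
    uint32_t expected_csum;
};

/* Returns 1 if the block's contents match its recorded checksum and it
 * may be replayed; 0 if the block must be dropped (and a full fsck
 * scheduled), without discarding the rest of the commit. */
static int replay_block_ok(const struct block_tag *tag,
                           const void *block, size_t len)
{
    return crc32_buf(block, len) == tag->expected_csum;
}
```

The point is that one corrupt block no longer invalidates its whole
commit, let alone every commit after it.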

						- Ted