Message-ID: <20130718001307.GC5790@blackbox.djwong.org>
Date:	Wed, 17 Jul 2013 17:13:07 -0700
From:	"Darrick J. Wong" <darrick.wong@...cle.com>
To:	"Theodore Ts'o" <tytso@....edu>
Cc:	linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH] ext4: Prevent massive fs corruption if verifying the
 block bitmap fails

On Wed, Jul 17, 2013 at 03:55:03PM -0400, Theodore Ts'o wrote:
> On Wed, Jul 17, 2013 at 12:43:56PM -0700, Darrick J. Wong wrote:
> > I also wrote a script that fills a fs, maliciously marks all the fs metadata
> > blocks as free, and writes more files to the fs, with the result that you
> > corrupt the metadata.  I wonder if it's feasible to modify mballoc to check
> > that it's not handing out well known metadata locations to files?
> 
> We have that --- it's the block_validity mount option.  I use it
> regularly for testing.  It's off by default because it does take a bit

Aha, I thought so. :)

> more CPU time for every single block allocation and deallocation.  It
> would be useful if someone who had access to fast PCIe-attached flash
> tried to measure the CPU utilization of a metadata-intensive workload
> (such as fs_mark) with and without block_validity.  If the overhead is
> negligible, we could enable this by default, and remove the mount
> option.
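(To make the overhead concrete: as I understand it, block_validity amounts to
an extra range check against the known metadata regions on every block
allocation and free.  A simplified sketch of that check -- not the real ext4
code, which keeps these "system zones" in an rbtree:)

/*
 * Simplified sketch of a per-allocation metadata-range check.  The real
 * ext4 block_validity code tracks the "system zones" (superblocks, group
 * descriptors, bitmaps, inode tables) in an rbtree and checks every
 * allocated or freed range against them.
 */
struct zone {
        unsigned long long start;       /* first block of the zone */
        unsigned long long end;         /* one past the last block */
};

static int range_is_valid(const struct zone *zones, int nzones,
                          unsigned long long blk, unsigned long long count)
{
        int i;

        for (i = 0; i < nzones; i++)
                if (blk < zones[i].end && blk + count > zones[i].start)
                        return 0;       /* overlaps fs metadata: reject */
        return 1;                       /* safe to hand out as file data */
}

That lookup runs on every request, which is presumably where the extra CPU
goes.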

Well, I don't have a fancy PCIe SSD, but I do have some RAM.  I wrote a program
that simulates the allocation behavior of unpacking a kernel tarball 14 times
via fallocate, and ran it 16 times.  The columns are test-name, elapsed time,
user time, and system time, all in seconds.  Kernel is 3.10, e2fsprogs is
1.43-WIP from last week.
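(For the curious: the simulator just creates a lot of small files and
preallocates plausible sizes into them.  A stripped-down sketch of the idea --
not the actual program, and the file count and size distribution here are
invented:)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *dir = argc > 1 ? argv[1] : ".";
        char path[4096];
        int i;

        for (i = 0; i < 50000; i++) {
                /* made-up "kernel tree" file sizes, roughly 1k-64k */
                off_t len = 1024 + (random() % 65536);
                int fd;

                snprintf(path, sizeof(path), "%s/f%07d", dir, i);
                fd = open(path, O_CREAT | O_WRONLY, 0644);
                if (fd < 0) {
                        perror(path);
                        return 1;
                }
                /* posix_fallocate() exercises ext4's block allocator much
                 * like unpacking a tarball would, without writing data. */
                if (posix_fallocate(fd, 0, len))
                        perror("fallocate");
                close(fd);
        }
        return 0;
}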

tmpfs file losetup'd:

block_validity,tar:    9.52 0.56 8.31
no_block_validity,tar: 9.47 0.56 8.26

block_validity,del:    7.57 0.30 6.96
no_block_validity,del: 7.56 0.31 6.95

Boring laptop mSATA SSD:

block_validity,tar:    15.24 0.64 8.37
no_block_validity,tar: 14.85 0.63 8.30

block_validity,del:     9.00 0.29 7.06
no_block_validity,del:  9.09 0.30 7.12

Encrypted external USB2 HDD:

block_validity,tar:    59.23 0.62 8.55
no_block_validity,tar: 59.51 0.67 8.51

block_validity,del:    21.05 0.33 7.46
no_block_validity,del: 21.37 0.32 7.71

With block_validity, the tar (allocation) test spent 0.6%, 0.8%, and 0.5% more
kernel time on tmpfs, the SSD, and the USB disk, respectively.  I'm not sure
why the delete test actually gets a little faster with block_validity, though.
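
(Those percentages come straight from the system-time columns above: 8.31 vs
8.26 is ~0.6% on tmpfs, 8.37 vs 8.30 is ~0.8% on the SSD, and 8.55 vs 8.51 is
~0.5% on the USB disk.)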

I can go run a mailserver workload or fs_mark or something too if you want.

--D