Date:	Wed, 23 Feb 2011 15:24:18 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	Rogier Wolff <R.E.Wolff@...wizard.nl>
Cc:	Theodore Tso <tytso@....EDU>, linux-ext4@...r.kernel.org
Subject: Re: fsck performance.

On 2011-02-23, at 1:53 PM, Rogier Wolff wrote:
> My implementation has been a "cleanroom" implementation in that I've
> only looked at the specifications and implemented it from there,
> although no external attestation is available that I was completely
> shielded from the newer GPLv3 version... 
> 
> On a slightly different note: 
> 
> A pretty good estimate of the number of inodes is available in the
> superblock (tot inodes - free inodes). A good hash size would be: "a
> rough estimate of the number of inodes." Two or three times more or
> less doesn't matter much. CPU is cheap. I'm not sure what the
> estimate for the "dircount" tdb should be.

The dircount can be extracted from the group descriptors, which count the number of allocated directories in each group.  Since the superblock "free inodes" count is no longer updated except at unmount time, the code would need to walk all of the group descriptors to get this number anyway.

> The amount of disk space that the tdb will use is at least: 
> 
>   overhead + hash_size * 4 + numrecords * (keysize + datasize + perrecordoverhead)
> 
> There must also be some overhead to store the size of the keys and
> data as both can be variable length. By implementing the "database"
> ourselves we could optimize that out. I don't think it's worth the
> trouble. 
> 
> With keysize equal 4, datasize also 4 and hash_size equal to numinodes
> or numrecords, we would get
> 
> overhead + numinodes * (12 + perrecordoverhead). 
> 
> In fact, my icount database grew to about 750 MB with only 23M
> inodes, so apparently the perrecordoverhead is about 20 bytes.
> This is the price you pay for using a much more versatile database
> than what you really need. Disk is cheap (except when checking a root
> filesystem!)
> 
> So... 
> 
> -- I suggest that for the icount tdb we move to using the superblock
> info as the hash size.
> 
> -- I suggest that we use our own hash function. tdb allows us to
> specify one, so instead of modifying tdb's bad built-in hash, we'll
> keep tdb itself intact and just pass a better (local) hash function.
> 
> 
> Does anybody know what the "dircount" tdb database holds, and what is
> an estimate for the number of elements eventually in the database?  (I
> could find out myself: I have the source. But I'm lazy. I'm a
> programmer you know...).
> 
> 
> On a separate note, my filesystem finished the fsck (33 hours (*)),
> and I started the backups again... :-)

If you have the opportunity, I wonder whether the entire need for tdb can be avoided in your case by using swap together with the icount optimization patches posted earlier.  I'd really like to get that patch included upstream, but it needs testing in an environment like yours, where icount is a significant factor.  This would avoid all of the tdb overhead.

Cheers, Andreas
