Date:	Mon, 12 Nov 2012 11:30:03 -0500
From:	Theodore Ts'o <tytso@....edu>
To:	George Spelvin <linux@...izon.com>
Cc:	linux-ext4@...r.kernel.org
Subject: Re: How full should the inode table be?

On Sun, Nov 11, 2012 at 04:55:12AM -0500, George Spelvin wrote:
> Now, it turns out that I have to rebuild it with 64-bit block numbers
> in order to grow it past 16 TB (wow, was *that* a nasty surprise),
> and I intend to use a somewhat saner bytes/inode ratio.
> 
> (Ignoring the slight space gain, fewer inodes means faster e2fsck.)
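The 16 TB ceiling George ran into follows directly from 32-bit block numbers combined with the usual block size. A quick sanity check (assuming 4 KiB blocks, the common default; this is just arithmetic, not e2fsprogs code):

```python
# Without the 64bit feature, ext4 stores block numbers in 32 bits,
# so the filesystem can address at most 2**32 blocks.
BLOCK_SIZE = 4096        # assuming the common 4 KiB block size
MAX_BLOCKS = 2 ** 32     # 32-bit block numbers

max_bytes = MAX_BLOCKS * BLOCK_SIZE
print(max_bytes)                    # 17592186044416 bytes
print(max_bytes // 2 ** 40, "TiB")  # 16 TiB
```

Hence growing past 16 TiB requires rebuilding with the 64bit feature enabled.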

Actually, with ext4, we keep track of the last used inode in each
block group, so there isn't a speed gain for using a smaller number of
inodes.  It did make a difference for ext3, but not for ext4.

> I could just use that, so the FS will run out of data blocks at about
> the same time as it runs out of inodes, but I wonder: does the FS benefit
> from more slack in inode allocation?

The file system doesn't actually gain anything one way or another in
terms of slack space in the inode table.  The major downside is that
if you guess wrong, and you have many more small files than you had
estimated, there's no way to change the inode ratio afterwards, short
of backing up and reformatting.  So that's why historically we've
tended to massively overprovision the number of inodes available to
the file system.
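To put rough numbers on the "slight space gain", here is a back-of-the-envelope sketch (not e2fsprops code; it assumes ext4's default 256-byte on-disk inode and treats mke2fs's -i bytes-per-inode ratio as a simple division, whereas mke2fs actually rounds per block group):

```python
def inode_table_overhead(fs_bytes, bytes_per_inode, inode_size=256):
    """Approximate inode count and total inode-table size for a
    given mke2fs -i (bytes-per-inode) ratio.  Assumes the ext4
    default 256-byte inode; real numbers differ slightly because
    mke2fs allocates inodes per block group."""
    inodes = fs_bytes // bytes_per_inode
    table_bytes = inodes * inode_size
    return inodes, table_bytes

TIB = 2 ** 40
# Default ratio (-i 16384) vs. a much sparser 1 MiB-per-inode
# ratio, on a 16 TiB filesystem:
for ratio in (16384, 1 << 20):
    inodes, table = inode_table_overhead(16 * TIB, ratio)
    print(f"-i {ratio}: {inodes} inodes, "
          f"{table / 2 ** 30:.0f} GiB of inode tables")
```

At the default ratio a 16 TiB filesystem carries on the order of a billion inodes and a few hundred GiB of inode tables; raising the ratio shrinks that table space, but as noted above the ratio is fixed at mkfs time.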

Regards,

						- Ted