Date:	Thu, 15 Mar 2012 13:55:36 -0400
From:	Phillip Susi <psusi@...ntu.com>
To:	Andreas Dilger <adilger@...ger.ca>
CC:	ext4 development <linux-ext4@...r.kernel.org>
Subject: Re: Status of META_BG?

On 3/15/2012 12:25 PM, Andreas Dilger wrote:
> In the case of very large filesystems (256TB or more, assuming 4kB
> block size) the group descriptor blocks will grow to fill an entire
> block group, and in the case of group 0 and group 1 they would start
> overlapping, which would not work.

To get an fs that large, you have to enable 64bit support, which also
means you can go past the limit of 32k blocks per group.  Doing that
should allow for a much more reasonable number of groups (which is a
good thing for several reasons), and would also solve this problem,
wouldn't it?
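
A quick back-of-the-envelope, just a sketch assuming 4kB blocks, a
one-block block bitmap (so 32768 blocks per group), and 64-byte
descriptors once 64bit is enabled:

    BLOCK_SIZE = 4096
    BLOCKS_PER_GROUP = 8 * BLOCK_SIZE         # 32768: block bitmap is one block
    DESC_SIZE = 64                            # 64-byte descriptors with 64bit
    FS_BYTES = 256 * 2**40                    # 256 TiB

    group_bytes = BLOCKS_PER_GROUP * BLOCK_SIZE      # 128 MiB per group
    groups = FS_BYTES // group_bytes                 # 2,097,152 groups
    gdt_blocks = groups * DESC_SIZE // BLOCK_SIZE    # 32768 descriptor blocks
    print(gdt_blocks, "GD blocks ==", gdt_blocks // BLOCKS_PER_GROUP, "whole group(s)")

So at 256TB the descriptor table alone is a full 128MB group, which
matches what you describe; doubling the blocks per group would halve
the group count and shrink the table accordingly.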

> META_BG addresses both of these issues by distributing the group
> descriptor blocks into the filesystem for each "meta group" (= the
> number of groups whose descriptors fit into a single block).

So it puts one GD block at the start of each meta group, i.e. every
several block groups?  Wouldn't that drastically slow down
opening/mounting the fs, since the disk has to seek to every one of
those scattered GD blocks?
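
If I'm reading the meta_bg scheme right, a rough sketch with the same
4kB block / 64-byte descriptor assumptions as above:

    BLOCK_SIZE = 4096
    DESC_SIZE = 64
    GROUPS_PER_META = BLOCK_SIZE // DESC_SIZE   # 64 groups share one GD block

    groups = 2_097_152                          # the 256TB example again
    meta_groups = groups // GROUPS_PER_META     # 32768 meta groups
    print(meta_groups, "separate GD block locations to read at mount time")

That's ~32k scattered reads instead of one contiguous run of GD blocks,
which is where my worry about mount time comes from.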

Perhaps if it were coupled with flex_bg, so that flex_factor GD blocks
were clustered together, that would mitigate the seeking somewhat; but
IIRC the default flex factor is only 16, so it might need to be bumped
up for such large disks.
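
Purely hypothetical numbers for that idea (this assumes some flex-style
clustering of the meta_bg GD blocks existed, which is not what flex_bg
does today; it's just arithmetic on the idea):

    import math

    meta_groups = 32768                 # from the 256TB example above
    for cluster in (1, 16, 256):        # 16 = the default flex_bg factor
        print(cluster, "->", math.ceil(meta_groups / cluster), "GD locations")

Even at 16 that's still a couple thousand seeks, which is why I think
the factor would need bumping for disks this size.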

> The number of backups is reduced (0 - 3 backups), and the blocks do
> not need to be contiguous anymore.

You know, I've been wondering why the group descriptors are backed up in 
the first place.  If the backups are only ever written at mkfs time, and 
can be reconstructed with mke2fs -S, then what purpose do they serve?

