Date:   Tue, 4 Sep 2018 11:56:04 -0700
From:   Liu Bo <obuil.liubo@...il.com>
To:     Jan Kara <jack@...e.cz>
Cc:     linux-ext4@...r.kernel.org, "Theodore Y. Ts'o" <tytso@....edu>
Subject: Re: metadata overhead

On Tue, Sep 4, 2018 at 12:52 AM, Jan Kara <jack@...e.cz> wrote:
> On Mon 03-09-18 17:20:21, Liu Bo wrote:
>> On Mon, Sep 3, 2018 at 2:53 AM, Jan Kara <jack@...e.cz> wrote:
>> > On Sun 02-09-18 23:58:56, Liu Bo wrote:
>> >> My question is:
>> >> Is there a way to calculate how much space the metadata occupies?
>> >>
>> >> So the case I've run into is that 'df /mnt' shows
>> >>
>> >> Filesystem           1K-blocks      Used Available Use% Mounted on
>> >> /dev/sdc              882019232  26517428 811356348   4% /mnt
>> >>
>> >> but 'du -s /mnt' shows
>> >> 13347132        /mnt
>> >>
>> >> And this is a freshly mounted ext4, so no deleted files or dirty data exist.
>> >>
>> >> The kernel is quite old (2.6.32), but I was just wondering, could it
>> >> be that metadata uses about 13G, given the whole filesystem is 842G?
>> >
>> > Yes, that sounds plausible.
>> >
>> >> I think it has nothing to do with "Reserved block counts", as df
>> >> calculates "Used" as "buf->f_blocks - buf->f_bfree" from the values
>> >> that ext4_statfs() reports.
>> >>
>> >> So if there is a way to determine how much space the metadata uses,
>> >> either by manually analyzing the output of dumpe2fs/debugfs or with a
>> >> tool, could you please suggest one?
>> >
>> > So the journal takes up some space. Debugfs command:
>> >
>> > stat <8>
>> >
>> > The inode table takes lots of blocks:
>> >
>> > stats
>> >
>> > Search for "Inode count" and multiply by "Inode size". Then there are
>> > bitmap blocks - count 2 blocks for each group. The rest of the overhead
>> > should be pretty minimal.
>> >
>>
>> Thank you so much for the reply, Jan.
>>
>> Per what you've mentioned, the journal + inode table have taken >14G
>> in this ext4, so that's a lot of space, good.
>>
>> And digging into it further, I found that the overhead from (journal +
>> inode_table + block_bitmap) is already excluded from the output of 'df',
>> as ext4_statfs() gets buf->f_blocks by
>>
>> buf->f_blocks = ext4_blocks_count(es) - EXT4_C2B(sbi, overhead);
>>
>> and ->f_blocks is shown as "Total". But there is still some gap between
>> "Used" in df (26517428 * 1024) and the summary reported by "du -s"
>> (13347132 * 1024),
>
> Correct, I forgot about this.
>
>> --------
>> # df /mnt
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/sdc              882019232  26517428 811356348   4% /mnt
>>
>> #du -s /mnt
>> 13347132        /mnt
>> --------
>>
>> Now I'm even more curious, any idea where that gap could come from?
>
> Interesting question. So e.g. for my test filesystem, the gap is caused by
> 'resize_inode' (inode number 7), which reserves blocks at the beginning of
> block groups to allow the group descriptors to grow. So check whether you
> have the resize_inode feature enabled (stats command in debugfs).
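
A quick way to check that from the command line (untested sketch; /dev/sdc
is the device from the df output above):

dumpe2fs -h /dev/sdc | grep -E 'Filesystem features|Reserved GDT blocks'
debugfs -R 'stat <7>' /dev/sdc   # inode 7 is the resize inode; its block
                                 # count should reflect the reserved GDT blocks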

You're right, "resize_inode" is enabled, and I found that on 4.18 at
least two things take some space in an empty ext4: 'resize_inode' and
the blocks reserved for metadata (2% or 16K).
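
And for completeness, the recipe you gave earlier (journal + inode tables +
two bitmaps per group) can be scripted roughly as below. This is an untested
sketch: the field names are the ones 'dumpe2fs -h' prints here, and /dev/sdc
is the device from the df output above.

DEV=/dev/sdc
val() { dumpe2fs -h "$DEV" 2>/dev/null | \
        awk -F: -v k="$1" '$1 == k { gsub(/[ \t]/, "", $2); print $2 }'; }

INODES=$(val "Inode count")
ISIZE=$(val "Inode size")          # bytes per on-disk inode
BSIZE=$(val "Block size")
BLOCKS=$(val "Block count")
BPG=$(val "Blocks per group")
NGROUPS=$(( (BLOCKS + BPG - 1) / BPG ))

echo "inode tables: $(( INODES * ISIZE / 1024 )) KiB"
echo "bitmaps:      $(( NGROUPS * 2 * BSIZE / 1024 )) KiB"  # 2 bitmap blocks per group
# journal: debugfs -R "stat <8>" "$DEV" and check the "Size:" line

(Though as discussed above, ext4_statfs() already subtracts this overhead
from ->f_blocks, so it lowers "Total" in df rather than inflating "Used".)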

thanks,
liubo
