Message-ID: <20180903095305.GF10027@quack2.suse.cz>
Date: Mon, 3 Sep 2018 11:53:05 +0200
From: Jan Kara <jack@...e.cz>
To: Liu Bo <obuil.liubo@...il.com>
Cc: linux-ext4@...r.kernel.org, Jan Kara <jack@...e.cz>,
"Theodore Y. Ts'o" <tytso@....edu>
Subject: Re: metadata overhead
Hi,
On Sun 02-09-18 23:58:56, Liu Bo wrote:
> My question is,
> Is there a way to calculate how much space metadata has occupied?
>
> So the case I've run into is that 'df /mnt' shows
>
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sdc 882019232 26517428 811356348 4% /mnt
>
> but 'du -s /mnt' shows
> 13347132 /mnt
>
> And this is a freshly mounted ext4, so no deleted files or dirty data exist.
>
> The kernel is quite old (2.6.32), but I was just wondering, could it
> be due to metadata using about 13G given the whole filesystem is 842G?
Yes, that sounds plausible.
> I think it has nothing to do with "Reserved block counts" as df
> calculates "Used" in ext4_statfs() by "buf->f_blocks - buf->f_bfree".
>
> So if there is a way to know the usage of metadata space, via either
> manual analysis from the output of dumpe2fs/debugfs or a tool, could
> you please suggest?
So the journal takes up some space; you can check its size with the debugfs
command:
	stat <8>
The inode tables take a lot of blocks. In the debugfs output of:
	stats
search for "Inode count" and multiply it by "Inode size". Then there are
bitmap blocks - count 2 blocks (block bitmap + inode bitmap) for each block
group. The rest of the overhead should be pretty minimal.
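The inode-table and bitmap arithmetic above can be sketched as a small script
parsing "dumpe2fs -h" output. This is just an illustration: the sample numbers
fed in below are made up to roughly match an ~842G filesystem, not taken from
your system - on a real box you would pipe `dumpe2fs -h /dev/sdc` in instead:

```shell
# Sketch: estimate the two big ext4 metadata contributors from
# "dumpe2fs -h" fields (inode tables, block/inode bitmaps).
estimate() {
  awk '
    /^Inode count:/      { inodes = $NF }   # number of inodes
    /^Inode size:/       { isize  = $NF }   # bytes per on-disk inode
    /^Block count:/      { blocks = $NF }   # total fs blocks
    /^Block size:/       { bsize  = $NF }   # bytes per block
    /^Blocks per group:/ { bpg    = $NF }   # blocks in each block group
    END {
      groups  = int((blocks + bpg - 1) / bpg)   # number of block groups
      itab    = inodes * isize                  # inode tables, in bytes
      bitmaps = groups * 2 * bsize              # 2 bitmap blocks per group
      printf "inode tables: %d MiB\n", int(itab    / 1048576)
      printf "bitmaps:      %d MiB\n", int(bitmaps / 1048576)
    }'
}

# Hypothetical numbers for illustration only; on a real system use:
#   dumpe2fs -h /dev/sdc | estimate
estimate <<'EOF'
Inode count:              55050240
Inode size:               256
Block count:              220504808
Block size:               4096
Blocks per group:         32768
EOF
```

With these made-up figures the inode tables alone come out to about 13 GiB,
which is the right order of magnitude for the df/du gap you observed. The
journal size has to be read separately, e.g. with
`debugfs -R "stat <8>" /dev/sdc`.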
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR