Message-ID: <20140427043537.GC25172@thunk.org>
Date: Sun, 27 Apr 2014 00:35:37 -0400
From: Theodore Ts'o <tytso@....edu>
To: Ext4 Developers List <linux-ext4@...r.kernel.org>
Cc: Dmitry Monakhov <dmonakhov@...nvz.org>
Subject: Re: [PATCH] resize2fs: fix overly-pessimistic calculation of minimum
size required

On Sat, Apr 26, 2014 at 10:48:14PM -0400, Theodore Ts'o wrote:
> For extent-mapped file systems, we need to reserve some extra space in
> case we need to grow the extent tree. Calculate the safety margin
> more intelligently, so we don't overestimate the amount of space
> required.
>
> Signed-off-by: "Theodore Ts'o" <tytso@....edu>
> Reported-by: Dmitry Monakhov <dmonakhov@...nvz.org>

I'm going to have to self-NACK this. This patch causes the resize2fs
regression tests to fail. (In fact, Dmitry's original patch also
causes the resize2fs regression tests to fail.)

The problem is kind of messy; when the file system starts at some
insanely large size and we shrink it down to something very small, we
end up releasing a lot of inode table blocks in the first block group
(many of them belonging to other block groups). But until we're 100%
sure the resize will be successful, we don't want to start overwriting
those inode table blocks.

For this reason, if we try to shrink the file system from 2TB down to
512MB, we can't always do it in one shot; we need to do it in multiple
steps, i.e. by calling "resize2fs -M /dev/sdXX" multiple times.
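
For illustration only, a minimal (untested) wrapper along the lines of
the sketch below would repeat the shrink until the block count reported
by dumpe2fs stops changing; the e2fsck run before each pass is just the
usual prerequisite for an offline shrink, and /dev/sdXX is a placeholder:

#!/bin/sh
# Untested sketch: keep shrinking to the minimum size until resize2fs
# can make no further progress.  /dev/sdXX is a placeholder device.
DEV=/dev/sdXX

blocks() {
    dumpe2fs -h "$DEV" 2>/dev/null | awk '/^Block count:/ {print $3}'
}

prev=
cur=$(blocks)
while [ "$cur" != "$prev" ]; do
    e2fsck -f -y "$DEV" || break   # resize2fs expects a freshly checked fs
    resize2fs -M "$DEV" || break
    prev=$cur
    cur=$(blocks)
done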

There really isn't a good way around this, and in fact, if people are
going to be doing silly things like taking a file system from 16T down
to 750MB, then needing to run resize2fs multiple times is fine. It
would be nice if you could shrink the file system down in a single
shot, but it's not a high priority.

- Ted