Message-ID: <53D3B12C.5040703@fnarfbargle.com>
Date:	Sat, 26 Jul 2014 21:46:20 +0800
From:	Brad Campbell <lists2009@...rfbargle.com>
To:	Theodore Ts'o <tytso@....edu>
CC:	Azat Khuzhin <a3at.mail@...il.com>, linux-ext4@...r.kernel.org
Subject: Re: Online resize issue with 3.13.5 & 3.15.6


On 26/07/14 20:45, Theodore Ts'o wrote:
> OK, it looks like the e2fsprogs patch got you through the first
> hurdle, but the failure is something that made no sense at first:
>
>> [489412.650430] EXT4-fs (md0): resizing filesystem from 5804916736 to
>> 5860149888 blocks
>> [489412.700282] EXT4-fs warning (device md0): verify_reserved_gdb:713:
>> reserved GDT 2769 missing grp 177147 (5804755665)
> The code path which emitted the above warning is something that should
> never be entered for file systems greater than 16TB.  But then I took a
> look at the first message that you sent on this thread, and I think I
> see what's going wrong.  From your dumpe2fs -h output:
>
> Filesystem features:      has_journal ext_attr resize_inode dir_index filetype
> extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink
> extra_isize
> Block count:              5804916736
> Reserved GDT blocks:      585
>
> If the block count is greater than 2**32 (4294967296), resize_inode
> must not be set, and reserved GDT blocks should be zero.  So this is
> definitely not right.
>
> I'm going to guess that this file system was originally a smaller size
> (and probably smaller than 16T), and then was resized to 22TB,
> probably using an earlier version of the kernel and/or e2fsprogs.  Is
> my guess correct?  If so, do you remember the history of what size the
> file system was, and in what steps it was resized, and what version of
> the e2fsprogs and the kernel that was used at each stage, starting
> from the original mke2fs and each successive resize?
>
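For reference, that constraint is easy enough to cross-check straight off
the primary superblock. Rough, untested sketch below (Python; the device
path is just an example, and the offsets/feature bits are as I read them
from the ext4 on-disk superblock layout -- dumpe2fs -h reports the same
fields):

#!/usr/bin/env python3
# Check: if the block count needs more than 32 bits, resize_inode should
# be absent and s_reserved_gdt_blocks should be zero.
import struct, sys

dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/md0"   # example device

with open(dev, "rb") as f:
    f.seek(1024)                      # primary superblock starts at byte 1024
    sb = f.read(1024)

blocks_lo    = struct.unpack_from("<I", sb, 0x04)[0]   # s_blocks_count_lo
compat       = struct.unpack_from("<I", sb, 0x5C)[0]   # s_feature_compat
incompat     = struct.unpack_from("<I", sb, 0x60)[0]   # s_feature_incompat
reserved_gdt = struct.unpack_from("<H", sb, 0xCE)[0]   # s_reserved_gdt_blocks
blocks_hi    = struct.unpack_from("<I", sb, 0x150)[0]  # s_blocks_count_hi

has_resize_inode = bool(compat & 0x0010)    # EXT4_FEATURE_COMPAT_RESIZE_INODE
has_64bit        = bool(incompat & 0x0080)  # EXT4_FEATURE_INCOMPAT_64BIT

blocks = blocks_lo + ((blocks_hi << 32) if has_64bit else 0)
print("blocks=%d 64bit=%s resize_inode=%s reserved_gdt_blocks=%d"
      % (blocks, has_64bit, has_resize_inode, reserved_gdt))

if blocks >= 2**32 and (has_resize_inode or reserved_gdt):
    print("inconsistent: >2^32 blocks but resize_inode/reserved GDT blocks set")
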
This was the first resize of this FS. Initially this array was about
15T. About 12 months ago I attempted to resize it up to 19T and bumped
up against the fact that I had not created the initial filesystem with
64-bit support, so I cobbled together some storage and did a
backup/create/restore. At that point I would probably have specified
resize_inode manually as an option to mke2fs along with 64bit, since
reading the man page it looked like a good idea given that I always had
plans to expand in the future. Fast forward 12 months: I've added 2
drives to the array and bumped up against this issue. So it was
initially 4883458240 blocks. It would have been created with e2fsprogs
from Debian Stable (so 1.42.5).

I can't test this to verify my memory, however, as I don't seem to be
able to create a sparse file large enough to build a filesystem in. I
appear to be bumping up against a 2T file size limit.
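
What I'm attempting is essentially the following (a minimal sketch; the
path and size are just examples). My guess is the 2T ceiling is the host
filesystem's own maximum file size (e.g. ext3 with 4K blocks tops out at
2TiB) rather than anything to do with sparseness:

#!/usr/bin/env python3
# Create a sparse backing file big enough to mke2fs a >16T test filesystem.
# truncate() only sets the size; no blocks are allocated until written.

path = "/scratch/ext4-test.img"   # hypothetical scratch location
size = 23 * 1024**4               # ~23 TiB, roughly the size of the real array

with open(path, "wb") as f:
    try:
        f.truncate(size)          # extend the file sparsely
    except OSError as e:
        # EFBIG here usually means the filesystem holding 'path' caps
        # file sizes below what was requested.
        print("truncate failed:", e)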

-- 
Dolphins are so intelligent that within a few weeks they can
train Americans to stand at the edge of the pool and throw them
fish.

