Message-Id: <D46B0853-722D-4337-9561-42CECF7DAF47@dilger.ca>
Date:	Mon, 13 Dec 2010 14:57:26 -0700
From:	Andreas Dilger <adilger@...ger.ca>
To:	Stephan Boettcher <boettcher@...sik.uni-kiel.de>
Cc:	ext4 development <linux-ext4@...r.kernel.org>
Subject: Re: 20TB ext4


On 2010-12-13, at 09:23, Stephan Boettcher wrote:
> A RAID1 (/dev/md1) over three 20GB partitions is the root filesystem,
> three more 20GB partitions are for swap, and a RAID5 (/dev/md0) is built
> from the six big partitions.
> 
> The 10TB /dev/md0 is exported via nbd.  I had to patch nbd-client to
> import this on a 32-bit machine, so that part works.
> 
> The intention was to export two (later three) of these via nbd to one of the
> servers, which combines them into a RAID5² with a net capacity of 20TB.  With
> the e2fsprogs master branch I could make a filesystem, but dumpe2fs and
> fsck failed.  Mounting the filesystem failed with EFBIG.
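
(For the arithmetic behind those numbers, a rough sketch, assuming the big partitions sit on the 2TB drives mentioned later in the thread: RAID5 over N members nets N-1 members' worth of space.)

/*
 * Rough sketch of the capacity arithmetic described above.  Assumes
 * 2TB members; RAID5 over N members yields N-1 members of net space.
 */
#include <stdio.h>

int main(void)
{
    const double member_tb  = 2.0;                  /* one big partition */
    const double per_server = (6 - 1) * member_tb;  /* 6-way RAID5 -> 10TB md0 */
    const double combined   = (3 - 1) * per_server; /* 3 nbd exports in RAID5 -> 20TB */

    printf("per-server md0: %.0f TB, nested RAID5 net: %.0f TB\n",
           per_server, combined);
    return 0;
}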

RAID-5 on top of RAID-5 is going to be VERY SLOW...  Also note that only a single "nbd client" system will be able to use this storage at one time.  If you have dedicated server nodes, and you want to be able to use these 20TB from multiple clients, you might consider using Lustre, which uses ext4 as the back-end storage, and can scale to many PB filesystems (largest known filesystem is 20PB, from 1344 * 8TB separate ext4 filesystems).

> Obviously, with 32-bit pgoff_t this will not work, and it was said
> elsewhere that making pgoff_t 64-bit on i386 will require a lot of faith
> and luck, since there are more than 3000 unsigned longs in the fs tree.

I don't think that is going to happen any time soon.  Lustre _can_ export from a 32-bit server, though it definitely isn't very common anymore.  For the cost of a single 2TB drive you can likely get a new motherboard + 64-bit CPU + RAM...
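
(For reference, the 16TB ceiling falls out of the page-cache index width: pgoff_t is an unsigned long, so on i386 it is 32 bits wide, and with 4KiB pages that bounds what the page cache can address. A minimal userspace sketch of the arithmetic, not kernel code:)

/*
 * Back-of-the-envelope illustration of the 32-bit pgoff_t ceiling:
 * the page cache indexes pages with pgoff_t, so with 4 KiB pages a
 * 32-bit index can address at most 2^32 * 4096 bytes = 16 TiB.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t page_size = 4096;               /* typical i386 PAGE_SIZE */
    const uint64_t max_pages = (uint64_t)1 << 32;  /* 32-bit page index */
    const uint64_t max_bytes = max_pages * page_size;

    printf("max addressable via 32-bit page index: %llu bytes (%llu TiB)\n",
           (unsigned long long)max_bytes,
           (unsigned long long)(max_bytes >> 40));
    return 0;
}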

> I'd prefer to run the setup self-contained, without an extra 64-bit head.
> Maybe I will partition it down to a 16TB and a 4TB partition.  Maybe I'll
> just dare to compile a kernel with typedef unsigned long long pgoff_t
> and see what happens; maybe I can help fix that kind of configuration.

I would suggest you examine what it is you really want to get out of this system.  Is it just for fun, to test ext4 with > 16TB filesystems?  Great, you can probably do that with the 64-bit nbd client.  Do you actually want to use this for data you care about?  Then trying to get 32-bit kernels to handle > 16TB block devices is a risky strategy to take just to avoid spending a few hundred USD.  Given that you are willing to spend a few thousand USD on the 2TB drives, you should consider just getting a 64-bit CPU + RAM to handle it.

Also note that running e2fsck on such a large filesystem will need 6-8GB of RAM at a minimum, and can be a lot more if there are serious problems (e.g. duplicate blocks).  Recently I saw a report of 22GB of RAM needed for e2fsck to complete, which is just impossible on a 32-bit machine.
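
(A back-of-the-envelope sketch of where those gigabytes go, assuming, purely for illustration, a couple of in-memory block bitmaps at one bit per block, a few inode bitmaps at one bit per inode, and a few bytes of per-inode bookkeeping; the real e2fsck data structures differ in detail:)

/*
 * Very rough sketch of why e2fsck memory use reaches the GB range on a
 * ~20TB filesystem.  All per-structure constants are illustrative, not
 * e2fsck's exact internals.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t fs_bytes   = 20ULL << 40;        /* ~20TB filesystem */
    const uint64_t block_size = 4096;
    const uint64_t blocks     = fs_bytes / block_size;
    const uint64_t inodes     = fs_bytes / 16384;   /* default ~1 inode per 16KiB */

    const uint64_t block_bitmaps = 2 * (blocks / 8); /* ~1 bit/block, 2 bitmaps */
    const uint64_t inode_bitmaps = 3 * (inodes / 8); /* ~1 bit/inode, 3 bitmaps */
    const uint64_t per_inode     = 4 * inodes;       /* ~4 bytes/inode bookkeeping */

    printf("blocks: %llu, inodes: %llu\n",
           (unsigned long long)blocks, (unsigned long long)inodes);
    printf("rough e2fsck footprint: %.1f GiB\n",
           (block_bitmaps + inode_bitmaps + per_inode) / (double)(1ULL << 30));
    return 0;
}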


Cheers, Andreas





