Message-ID: <s6nsjr6cb64.fsf@blaulicht.switch.brux>
Date:	Sun, 19 Jun 2011 13:28:51 +0200
From:	Stephan Boettcher <boettcher@...sik.uni-kiel.de>
To:	Andreas Dilger <adilger@...ger.ca>
Cc:	ext4 development <linux-ext4@...r.kernel.org>
Subject: Re: 2.6.39.1: Intel I340-T4: irq/64-eth3-TxR: page allocation failure. order:1, mode:0x20

Andreas Dilger <adilger@...ger.ca> writes:

> On 2011-06-18, at 11:39 AM, Stephan Boettcher wrote:
>> Andreas Dilger <adilger@...ger.ca> writes:
>>> There are a few places in the ext4 mount that are doing large
>>> allocations. In some places they fall back to vmalloc, so they should
>>> really be done with GFP_NOWARN.
>>> 
>>> A few places don't yet fall back to vmalloc(), which is a problem
>>> with fragmented memory or very large filesystems. We were trying to
>>> test a 192TB ext4 filesystem, but were unable to mount it without
>>> patching the kernel.
>> 
>> :-O ...  my puny 20TB ext4 filesystem did not do something like
>> this, yet.
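
For reference, the kmalloc-to-vmalloc fallback you describe would look
roughly like the sketch below (illustrative only, with made-up helper
names, not the actual ext4 code):

	#include <linux/slab.h>    /* kmalloc, kfree */
	#include <linux/vmalloc.h> /* vmalloc, vfree */
	#include <linux/mm.h>      /* is_vmalloc_addr */

	/*
	 * Try a physically contiguous allocation first, passing
	 * __GFP_NOWARN so an expected failure does not log a page
	 * allocation warning, then fall back to vmalloc(), which only
	 * needs virtually contiguous pages.
	 */
	static void *big_alloc(size_t size, gfp_t flags)
	{
		void *ptr = kmalloc(size, flags | __GFP_NOWARN);

		if (!ptr)
			ptr = vmalloc(size);
		return ptr;
	}

	/* The matching free has to check which allocator was used. */
	static void big_free(void *ptr)
	{
		if (is_vmalloc_addr(ptr))
			vfree(ptr);
		else
			kfree(ptr);
	}
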
>
> What sort of experience do you have with using a filesystem > 20TB?
> I don't think there are many users out there yet that are doing this
> today, so it would be great if you could share some data with us.

I will, as soon as something interesting shows up.  Currently it is
offline; I need to buy some hardware for the frontend.

The setup is nfs-md-nbd-md-sata, RAID5² (RAID5 over RAID5), 3*(6*2TB),
mostly for backups.  The aim is to keep some old but solid 32-bit
servers useful for a little longer.

Three 32-bit servers each provide a 10TB nbd to the frontend, which
must be 64-bit.  The frontend, which ran the outer md RAID5 across the
three nbd devices, was an Atom525 that I had to return to its original
duties last week.

So far I have filled it to about 25% with backups via rsync.

I did not observe any problems with the filesystem.  I ran several
fscks, which were surprisingly fast.  The problems I had were that I
could not log in to any of the servers while they were busy rebuilding
the RAID.  That will be solved with a little more networking gear.

As soon as I get new frontend hardware, I can run some tests, if
somebody tells me what to do and how.  The data currently on there is
expendable.  The tests should not target performance of any kind, for
obvious reasons.

> So far, we've only been doing testing and benchmarking (mke2fs, e2fsck
> times, IO and metadata load tests, etc) and I don't know that all of
> the "real world" corner cases have been tested yet.

Well, all the real-world corner cases will be well out of my reach
with this setup.

-- 
Stephan
