Message-ID: <20100819131017.GV16603@skl-net.de>
Date: Thu, 19 Aug 2010 15:10:17 +0200
From: Andre Noll <maan@...temlinux.org>
To: Ted Ts'o <tytso@....edu>
Cc: Andreas Dilger <adilger@...ger.ca>,
linux-ext4 <linux-ext4@...r.kernel.org>,
Marcus Hartmann <marcus.hartmann@...bingen.mpg.de>
Subject: Re: Memory allocation failed, e2fsck: aborted
On Wed, Aug 18, 20:54, Ted Ts'o wrote:
> On Wed, Aug 18, 2010 at 02:20:13PM -0600, Andreas Dilger wrote:
> >
> > Ah, that is also a major user of memory, and not necessarily one
> > that optimizing the internal bitmap will help significantly. It may
> > well be that your swap cannot be used if a single allocation is in
> > the same neighbourhood as the total RAM size.
>
> Something which *might* help (but will take a long time) is to add to
> your /etc/e2fsck.conf (if you have one; if not, create one with these
> contents):
>
> [scratch_files]
> directory = /var/cache/fsck
>
> (And then make sure /var/cache/fsck exists.)
Thanks for the hint. It has been running for an hour now and I will
report back tomorrow. ATM it's at 1% and the two files in
/var/cache/fsck are ~50M in size.
> Unfortunately, as it turns out, tdb (from Samba) doesn't scale as
> well as I would have liked, so it's on my todo list to replace it
> with something else. The problem is that berk_db has non-standard
> interfaces which vary from version to version. So I've avoided using
> it up until now.
Silly question: Would it be possible to simply mmap a large enough
file for the data and use e.g. rbtrees for the lookups? If so,
osl [1] could probably be an option. It's very simple, but likely too
slow on inserts to be useful for e2fsprogs.
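To make the idea concrete, here is a rough userspace sketch. It is
only an illustration of the mmap-plus-tree approach, not e2fsprogs
code: the record layout, the scratch file name and the use of POSIX
tsearch() in place of a real rbtree are all assumptions of mine.

#include <fcntl.h>
#include <search.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct record {			/* hypothetical fixed-size record */
	uint64_t key;		/* e.g. an inode number */
	uint64_t value;		/* e.g. its reference count */
};

static int cmp(const void *a, const void *b)
{
	const struct record *x = a, *y = b;

	return (x->key > y->key) - (x->key < y->key);
}

int main(void)
{
	size_t nrec = 1024 * 1024;	/* 1M records, a 16M scratch file */
	size_t len = nrec * sizeof(struct record);
	struct record *recs, probe = {.key = 42}, **hit;
	void *root = NULL;		/* tree managed by tsearch() */
	int fd = open("scratch.map", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0 || ftruncate(fd, len) < 0)
		return 1;
	/* The records live in the file mapping, so the kernel pages
	 * them in and out on demand instead of the whole table being
	 * one huge anonymous allocation that must fit in RAM. */
	recs = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (recs == MAP_FAILED)
		return 1;
	for (uint64_t i = 0; i < 1000; i++) {	/* toy data */
		recs[i].key = i * 7 % 1000;
		recs[i].value = i;
		tsearch(&recs[i], &root, cmp);	/* index the record */
	}
	hit = tfind(&probe, &root, cmp);	/* O(log n) lookup */
	if (hit)
		printf("key 42 -> value %llu\n",
		       (unsigned long long)(*hit)->value);
	munmap(recs, len);
	close(fd);
	return 0;
}

Only the small tree nodes must stay resident; whether inserts are
fast enough is exactly the open question.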
> A related blog entry:
>
> http://thunk.org/tytso/blog/2009/01/12/wanted-incremental-backup-solutions-that-use-a-database/
Hey, I read this posting back then, and I agree with what you say.
However, we are quite happy with our hard-link-based backup and use
it to "snapshot" file systems as large as 16T. Users love that they
can simply copy back an older version of the file they just removed
by accident. Another killer argument for this type of backup is that
you can easily replace a broken system by the machine that stores
the backup. This takes an hour, while restoring everything from tapes
takes _weeks_.
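In case the trick is not familiar: the core of it is just link(2).
Here is a rough sketch with made-up paths (real tools such as cp -al
or rsync --link-dest do the directory walk for you):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;

	/* Hypothetical paths: backup.0 holds yesterday's snapshot,
	 * backup.1 is today's. For an unchanged file the "copy" is
	 * just a second directory entry pointing at the same inode,
	 * so it costs no data blocks at all. */
	if (link("backup.0/data.bin", "backup.1/data.bin") < 0) {
		perror("link");
		return 1;
	}
	if (stat("backup.1/data.bin", &st) == 0)
		printf("inode %llu now has %llu links\n",
		       (unsigned long long)st.st_ino,
		       (unsigned long long)st.st_nlink);
	return 0;
}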
But yes, from the file system developer's POV the whole concept of
hard-link-based backups must be a nightmare ;) And it does not work
well if there are very many files, since every snapshot adds another
directory entry (and bumps the link count) for each file, which is
exactly the kind of bookkeeping that makes fsck's memory use explode.
Unfortunately, this is the case for the file system in question.
> P.S. I recently was told about a new backup system that meets the
> requirements detailed in my post:
>
> http://sites.google.com/site/hashbackup/home/features
>
> I haven't tried it yet, but it looks interesting. Let me know if you
> do try it and what you think of it.
Looks interesting, but where's the source? I might give it a try for
the problematic file system, but maybe not before next month.
Thanks
Andre
[1] http://systemlinux.org/~maan/osl
--
The only person who always got his work done by Friday was Robinson Crusoe