Message-ID: <20110509224511.GC15227@parisc-linux.org>
Date: Mon, 9 May 2011 16:45:11 -0600
From: Matthew Wilcox <matthew@....cx>
To: Zdenek Kabelac <zkabelac@...hat.com>
Cc: Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-parisc@...r.kernel.org,
Hugh Dickins <hughd@...gle.com>,
Oleg Nesterov <oleg@...hat.com>, agk@...hat.com
Subject: Re: [PATCH] Don't mlock guardpage if the stack is growing up
On Mon, May 09, 2011 at 01:43:59PM +0200, Zdenek Kabelac wrote:
> > Why doesn't it use mlockall()? Because glibc maps all locales into the
> > process. Glibc packs all locales into a 100MB file and maps that file into
> > every process. Even if the process uses just one locale, glibc maps them all.
> >
> > So, when LVM used mlockall, it consumed >100MB of memory, which caused
> > out-of-memory problems in system installers.
> >
> > So an alternate way of locking was added to LVM --- read all the mappings
> > and lock them, except for the glibc locale file.
> >
> > The real fix would be to fix glibc not to map 100MB into every process.
>
> I should probably add a few words here.
>
> Glibc knows a few more ways around this - it can work with one locale file
> per language, or even avoid mmap entirely and allocate the locale data in
> memory. It usually depends on the distribution - Fedora decided to combine
> all locales into one huge file (>100MB), while Ubuntu/Debian mmaps each
> locale individually (usually ~MB each).
Sounds to me like glibc should introduce an mlockmost() call that does all
the work for you ...
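
Roughly what LVM already does by hand, and what such a call could wrap.
A rough userspace sketch only -- the mlockmost name and the "locale-archive"
match are illustrative, not an existing glibc interface:

/* Walk /proc/self/maps and mlock() every mapping except the glibc
 * locale archive.  Hypothetical sketch; error handling is minimal. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static int mlockmost(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	int ret = 0;

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		unsigned long start, end;
		char path[256] = "";

		/* line format: start-end perms offset dev inode [path] */
		if (sscanf(line, "%lx-%lx %*s %*s %*s %*s %255s",
			   &start, &end, path) < 2)
			continue;

		/* skip the huge mapped locale archive */
		if (strstr(path, "locale-archive"))
			continue;

		if (mlock((void *)start, end - start))
			ret = -1;	/* note the failure, keep going */
	}
	fclose(f);
	return ret;
}

The simple version would of course just be mlockall(MCL_CURRENT), which is
exactly what blows up past 100MB once the locale archive is mapped.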
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
--