Message-Id: <20070917131526.e8db80fe.akpm@linux-foundation.org>
Date: Mon, 17 Sep 2007 13:15:26 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Rik van Riel <riel@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
	Anton Altaparmakov <aia21@....ac.uk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	marc.smith@...ail.mcc.edu
Subject: Re: VM/VFS bug with large amount of memory and file systems?

On Mon, 17 Sep 2007 13:11:14 -0400
Rik van Riel <riel@...hat.com> wrote:
> > I'm guessing there is no pressure at all on zone_highmem so the
> > kernel will not try to reclaim pagecache. And because the pagecache
> > pages are happily sitting there, the buggerheads are pinned and do not
> > get reclaimed.

Yeah, this got pretty much unavoidably broken when we killed the global
LRU in 2.5.early.  It's odd that it took this long for someone to hit it.

> I've got code for this in RHEL 3, but never bothered to
> merge it upstream since I thought people with large memory
> systems would be running 64 bit kernels by now.
>
> Obviously I was wrong. Andrew, are you interested in a
> fix for this problem?
>
> IIRC I simply kept a list of all buffer heads and walked
> that to reclaim pages when the number of buffer heads is
> too high (and we need memory). This list can be maintained
> in places where we already hold the lock for the buffer head
> freelist, so there should be no additional locking overhead
> (again, IIRC).
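
If I'm reading that right, the shape of it would be something like the
below.  Purely illustrative: the b_reclaim list member on buffer_head is
invented and the locking is hand-waved.

#include <linux/buffer_head.h>
#include <linux/list.h>
#include <linux/mm.h>

/* hypothetical global list which every live buffer_head gets added to */
static LIST_HEAD(bh_reclaim_list);

/*
 * Walk the global bh list and try to strip the pages the bh's are
 * attached to.  try_to_free_buffers() refuses if any buffer on the
 * page is still in use, so this only drops clean, unused ones.
 */
static void reclaim_buffer_heads(int nr_to_scan)
{
	struct buffer_head *bh;

	list_for_each_entry(bh, &bh_reclaim_list, b_reclaim) {
		struct page *page = bh->b_page;

		if (nr_to_scan-- <= 0)
			break;
		if (!page || TestSetPageLocked(page))
			continue;	/* bh unattached or page trylock failed */
		if (page_has_buffers(page))
			try_to_free_buffers(page);
		unlock_page(page);
	}
}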

Christoph's slab defragmentation code should permit us to fix this:
grab a page of buffer_heads off the slab lists, trylock the page,
strip the buffer_heads.  I think that would be a better approach
if we can get it going, because it's more general.
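
Very roughly, the per-page stripping step would be something like this
(sketch only; how we get handed the buffer_head objects sitting in a
slab page is up to the defrag code):

/*
 * Given the buffer_head objects found in one slab page, locate the
 * pagecache page each one is attached to, trylock it and try to strip
 * its buffers.  Returns the number of pages we managed to strip.
 */
static int strip_bh_slab_page(struct buffer_head **bhs, int nr)
{
	int i, freed = 0;

	for (i = 0; i < nr; i++) {
		struct page *page = bhs[i]->b_page;

		if (!page || TestSetPageLocked(page))
			continue;
		if (page_has_buffers(page) && try_to_free_buffers(page))
			freed++;
		unlock_page(page);
	}
	return freed;
}

Completely untested, of course, and the hard part is finding the bh slab
pages and pinning the objects in the first place -- that's exactly what
Christoph's infrastructure is supposed to give us.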