Message-ID: <20110328174250.GE8529@home.goodmis.org>
Date: Mon, 28 Mar 2011 13:42:50 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: John Lepikhin <johnlepikhin@...il.com>
Cc: linux-kernel@...r.kernel.org,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-fsdevel@...r.kernel.org
Subject: Re: Very aggressive memory reclaim

[ Add Cc's of those who may help you ]

-- Steve

On Mon, Mar 28, 2011 at 08:39:29PM +0400, John Lepikhin wrote:
> Hello,
>
> I run a heavily loaded machine with 10M+ inodes on XFS, 50+ GB of
> memory, intensive HDD traffic, and 20-50 forks per second, on vanilla
> kernel 2.6.37.4. The problem is that the kernel frees memory very
> aggressively.
>
> For example:
>
> 25% of memory is used by processes
> 50% for page caches
> 7% for slabs, etc.
> 18% free.
>
> That's not ideal, but it works. After a few hours:
>
> 25% of memory is used by processes
> 62% for page caches
> 7% for slabs, etc.
> 5% free.
>
> Most files are cached and everything works perfectly. This is the
> moment when the kernel decides to free some memory. After memory
> reclaim:
>
> 25% of memory is used by processes
> 25% for page caches(!)
> 7% for slabs, etc.
> 43% free(!)
>
> The page cache is dropped and the server becomes too slow. This is
> the beginning of a new cycle.
>
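
For reproducing the figures quoted above: a minimal sketch (not from the
original report) that computes a similar breakdown from /proc/meminfo,
assuming the standard MemTotal, MemFree, Buffers, Cached and Slab fields
and approximating the "used by processes" share as whatever is left over:

#!/usr/bin/env python
# Rough approximation of the breakdown quoted above, read from
# /proc/meminfo. All values in that file are in kB.

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])
    return info

m = meminfo()
total = m["MemTotal"]
free  = m["MemFree"]
cache = m.get("Cached", 0) + m.get("Buffers", 0)
slab  = m.get("Slab", 0)
procs = total - free - cache - slab   # crude "used by processes" estimate

for name, kb in [("processes (approx)", procs), ("page cache", cache),
                 ("slab", slab), ("free", free)]:
    print("%-20s %5.1f%%" % (name, 100.0 * kb / total))

Run periodically, this makes the collapse of the page-cache share visible
as it happens.
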
> I didn't find any huge mallocs at that moment. It looks like, because
> of the large number of small allocations (forks), the kernel makes a
> pessimistic forecast about future memory usage and frees too much
> memory. Are there any options for tuning this? Any other suggestions?
>
> Thanks!
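
Regarding tuning options: nothing below is established by this thread, but
these VM sysctls are the knobs most commonly inspected for this kind of
symptom (aggressive shrinking of the page cache and of the inode/dentry
caches). A small sketch that just dumps their current values; treating any
of them as the culprit here is an assumption:

#!/usr/bin/env python
# Dump the VM sysctls that most directly influence reclaim behaviour.
# zone_reclaim_mode only exists on NUMA-capable kernels, hence the
# try/except.

import os

KNOBS = [
    "vm/swappiness",           # swap anonymous pages vs. drop page cache
    "vm/vfs_cache_pressure",   # how hard inode/dentry caches are shrunk
    "vm/min_free_kbytes",      # free-memory floor the kernel maintains
    "vm/zone_reclaim_mode",    # NUMA-local reclaim; 1 can drop much cache
    "vm/dirty_ratio",          # writeback threshold (percent of memory)
    "vm/dirty_background_ratio",
]

for knob in KNOBS:
    path = os.path.join("/proc/sys", knob)
    try:
        with open(path) as f:
            print("%-28s %s" % (knob.replace("/", "."), f.read().strip()))
    except IOError:
        print("%-28s (not present on this kernel)" % knob.replace("/", "."))

On large NUMA machines, vm.zone_reclaim_mode=1 in particular is a frequent
cause of page cache being reclaimed far more aggressively than expected;
whether that applies to this machine is not known from the report.
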
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/