Message-ID: <5139B214.3040303@symas.com>
Date: Fri, 08 Mar 2013 01:40:36 -0800
From: Howard Chu <hyc@...as.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
CC: Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
linux-kernel <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: mmap vs fs cache
Kirill A. Shutemov wrote:
> On Thu, Mar 07, 2013 at 11:46:39PM -0800, Howard Chu wrote:
>> You're misreading the information then. slapd is doing no caching of
>> its own; its RSS and SHR sizes are identical. All it is using is the
>> mmap, nothing else. RSS == SHR == FS cache, up to 16GB. RSS is always
>> == SHR, but above 16GB they grow more slowly than the FS cache.
>
> It only means that some pages got unmapped from your process. That can
> happen, for instance, due to page migration. There's nothing to worry
> about: the page will be mapped back in on the next fault to it, and it's
> only a minor fault since the page is still in the pagecache anyway.
Unfortunately there *is* something to worry about. As I said already: when
the test spans 30GB, the FS cache fills up the rest of RAM and the test does
a lot of real I/O even though it shouldn't need to. Please read the entire
original post before replying.
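
For what it's worth, a rough sketch along these lines (the /tmp/testfile
path is just a placeholder, not the actual database file) can show which
kind of faults a pass over the mapping takes, by diffing the getrusage()
fault counters around the scan:

/* Sketch only: mmap a file read-only, touch one byte per page, and report
 * how many minor vs. major faults the scan cost.  Major faults mean real
 * reads from disk; minor faults mean the pages were already in the
 * pagecache and only had to be mapped back in. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/tmp/testfile"; /* placeholder */
	struct stat st;
	struct rusage before, after;

	int fd = open(path, O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	long pagesize = sysconf(_SC_PAGESIZE);
	volatile char sum = 0;

	getrusage(RUSAGE_SELF, &before);
	/* Read one byte per page so every page of the mapping is faulted in. */
	for (off_t off = 0; off < st.st_size; off += pagesize)
		sum += map[off];
	getrusage(RUSAGE_SELF, &after);

	printf("minor faults: %ld  major faults: %ld\n",
	       after.ru_minflt - before.ru_minflt,
	       after.ru_majflt - before.ru_majflt);

	munmap(map, st.st_size);
	close(fd);
	return 0;
}

If the pages were still in the pagecache, the re-faults should all be
minor; real I/O shows up as major faults.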
There is no way that a process accessing only 30GB of an mmap should be able
to fill up 32GB of RAM. There's nothing else running on the machine; I've
killed or suspended everything else in userland besides a couple of shells
running top and vmstat. When I manually drop_caches repeatedly, slapd's
RSS/SHR eventually grows to 30GB and the physical I/O stops.
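
For reference, a similar sketch (again with a placeholder path) reports how
many pages of the mapped file are resident via mincore(). Note that
mincore() reflects pagecache residency rather than what is currently mapped
into the process's page tables, so it tracks the FS-cache side of the
numbers rather than RSS/SHR:

/* Sketch only: mmap a file and count how many of its pages mincore()
 * reports as resident in memory.  Comparing this count against the
 * process's RSS/SHR in top shows whether pages are cached but no longer
 * mapped into the process. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/tmp/testfile"; /* placeholder */
	struct stat st;

	int fd = open(path, O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	long pagesize = sysconf(_SC_PAGESIZE);
	size_t npages = (st.st_size + pagesize - 1) / pagesize;
	unsigned char *vec = malloc(npages);
	if (!vec || mincore(map, st.st_size, vec) < 0) {
		perror("mincore");
		return 1;
	}

	size_t resident = 0;
	for (size_t i = 0; i < npages; i++)
		if (vec[i] & 1)
			resident++;

	printf("%zu of %zu pages resident (%.1f%%)\n",
	       resident, npages, 100.0 * resident / npages);

	free(vec);
	munmap(map, st.st_size);
	close(fd);
	return 0;
}

Running that before and after drop_caches, alongside top, would show whether
the gap above 16GB is in residency itself or only in what stays mapped.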
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/