Message-ID: <20100819092536.GH2370@arachsys.com>
Date: Thu, 19 Aug 2010 10:25:36 +0100
From: Chris Webb <chris@...chsys.com>
To: Balbir Singh <balbir@...ux.vnet.ibm.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Wu Fengguang <fengguang.wu@...el.com>,
Minchan Kim <minchan.kim@...il.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Subject: Re: Over-eager swapping

Balbir Singh <balbir@...ux.vnet.ibm.com> writes:

> Can you give an idea of what the meminfo inside the guest looks like?

Sorry for the slow reply here. Unfortunately not, as these guests are run
on behalf of customers: they install operating systems of their choice and
run them on our service.

> Have you looked at
> http://kerneltrap.org/mailarchive/linux-kernel/2010/6/8/4580772

Yes, I've been following this discussion with interest. Our application is
one where we have little to no control over what goes on inside the guests,
but these sorts of approaches definitely make sense where host and guests
are under the same administrative control.

> Do we have reason to believe the problem can be solved entirely in the
> host?

It's not clear to me why this should be difficult, given that the total
memory allocated to guests (and system processes) is always strictly less
than the total RAM available in the host. I do understand that it won't
allow for as aggressive an overcommit (except via KSM) or be as efficient,
because file-backed guest pages won't be evicted by memory pressure in the
host: to the host they are indistinguishable from anonymous pages.
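
As a minimal sketch of the invariant I have in mind (the per-guest sizes
here are hypothetical stand-ins for whatever the management layer records,
e.g. the -m value passed to each qemu), checking it on the host is trivial:

  #!/usr/bin/env python
  # Sketch: check that the total memory allocated to guests stays
  # strictly below the host's physical RAM. The allocation list is a
  # hypothetical stand-in for the real per-guest sizes.

  GUEST_ALLOCATIONS_MB = [2048, 4096, 1024]  # hypothetical -m sizes

  def host_mem_total_mb():
      with open('/proc/meminfo') as f:
          for line in f:
              if line.startswith('MemTotal:'):
                  return int(line.split()[1]) // 1024  # kB -> MB
      raise RuntimeError('MemTotal not found in /proc/meminfo')

  guests = sum(GUEST_ALLOCATIONS_MB)
  host = host_mem_total_mb()
  print('guests: %d MB of %d MB host RAM' % (guests, host))
  if guests >= host:
      print('WARNING: guest allocations not strictly below host RAM')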

After all, a solution that isn't ideal but does work is to turn off swap
completely! This is what we've been doing to date. The only problem with
this is that we can't dip into swap in an emergency if there's no swap
there at all.
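
A middle ground would be to leave some swap configured for emergencies but
tell the kernel to avoid it under normal pressure, e.g. by setting
vm.swappiness to 0. A minimal sketch, assuming root on the host:

  # Equivalent to "sysctl vm.swappiness=0": discourage the kernel from
  # swapping anonymous pages except under severe memory pressure.
  with open('/proc/sys/vm/swappiness', 'w') as f:
      f.write('0\n')
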
Best wishes,
Chris.