Message-ID: <20090622132702.6638d841@skybase>
Date: Mon, 22 Jun 2009 13:27:02 +0200
From: Martin Schwidefsky <schwidefsky@...ibm.com>
To: Dan Magenheimer <dan.magenheimer@...cle.com>
Cc: linux-kernel@...r.kernel.org, xen-devel@...ts.xensource.com,
npiggin@...e.de, chris.mason@...cle.com, kurt.hackel@...cle.com,
dave.mccracken@...cle.com, Avi Kivity <avi@...hat.com>,
jeremy@...p.org, Rik van Riel <riel@...hat.com>,
alan@...rguk.ukuu.org.uk, Rusty Russell <rusty@...tcorp.com.au>,
akpm@...l.org, Marcelo Tosatti <mtosatti@...hat.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
tmem-devel@....oracle.com, sunil.mushran@...cle.com,
linux-mm@...ck.org, Himanshu Raj <rhim@...rosoft.com>
Subject: Re: [RFC] transcendent memory for Linux
On Fri, 19 Jun 2009 16:53:45 -0700 (PDT)
Dan Magenheimer <dan.magenheimer@...cle.com> wrote:
> Tmem has some similarity to IBM's Collaborative Memory Management,
> but creates more of a partnership between the kernel and the
> "privileged entity" and is not very invasive. Tmem may be
> applicable for KVM and containers; there is some disagreement on
> the extent of its value. Tmem is highly complementary to ballooning
> (aka page granularity hot plug) and memory deduplication (aka
> transparent content-based page sharing) but still has value
> when neither are present.
The basic idea seems to be that you reduce the amount of memory
available to the guest and as compensation give the guest some
tmem, no? If that is the case then the effect of tmem is somewhat
comparable to that of volatile page cache pages.
The big advantage of this approach is its simplicity, but there
are down sides as well:
1) You need to copy the data between the tmem pool and the page
cache. At least temporarily there are two copies of the same
page around. That increases the total amount of used memory.
2) The guest has a smaller memory size. Either the memory is
   large enough for the working set, in which case tmem is
   ineffective, or the working set does not fit, which increases
   the memory pressure and the cpu cycles spent in the mm code.
3) There is an additional tuning knob, the size of the tmem pool
   for the guest. I see the need for a clever algorithm to determine
   the sizes of the different tmem pools.
Overall I would say it's worthwhile to investigate the performance
impact of this approach.
--
blue skies,
Martin.
"Reality continues to ruin my life." - Calvin.