Message-ID: <4A59AAF1.1030102@redhat.com>
Date:	Sun, 12 Jul 2009 12:20:49 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Dan Magenheimer <dan.magenheimer@...cle.com>
CC:	Anthony Liguori <anthony@...emonkey.ws>,
	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	npiggin@...e.de, akpm@...l.org, jeremy@...p.org,
	xen-devel@...ts.xensource.com, tmem-devel@....oracle.com,
	alan@...rguk.ukuu.org.uk, linux-mm@...ck.org,
	kurt.hackel@...cle.com, Rusty Russell <rusty@...tcorp.com.au>,
	dave.mccracken@...cle.com, Marcelo Tosatti <mtosatti@...hat.com>,
	sunil.mushran@...cle.com, Schwidefsky <schwidefsky@...ibm.com>,
	chris.mason@...cle.com, Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux

On 07/10/2009 06:23 PM, Dan Magenheimer wrote:
>> If there was one change to tmem that would make it more
>> palatable, for
>> me it would be changing the way pools are "allocated".  Instead of
>> getting an opaque handle from the hypervisor, I would force
>> the guest to
>> allocate its own memory and to tell the hypervisor that it's a tmem
>> pool.
>>      
>
> An interesting idea but one of the nice advantages of tmem being
> completely external to the OS is that the tmem pool may be much
> larger than the total memory available to the OS.  As an extreme
> example, assume you have one 1GB guest on a physical machine that
> has 64GB physical RAM.  The guest now has 1GB of directly-addressable
> memory and 63GB of indirectly-addressable memory through tmem.
> That 63GB requires no page structs or other data structures in the
> guest.  And in the current (external) implementation, the size
> of each pool is constantly changing, sometimes dramatically, so
> the guest would have to be prepared to handle this.  I also wonder
> if this would make shared-tmem-pools more difficult.
>    

Having no struct pages is also a downside; for example, this guest cannot 
have more than 1GB of anonymous memory without swapping like mad.  
Swapping to tmem is fast but still a lot slower than having the memory 
available.

tmem makes life a lot easier for the hypervisor and the guest, but 
also gives up a lot of flexibility.  There's a difference between memory 
and a very fast synchronous backing store.

> I can see how it might be useful for KVM though.  Once the
> core API and all the hooks are in place, a KVM implementation of
> tmem could attempt something like this.
>    

My worry is that tmem for kvm leaves a lot of niftiness on the table, 
since it was designed for a hypervisor with much simpler memory 
management.  kvm can already use spare memory for backing guest swap, 
and can already convert unused guest memory to free memory (by swapping 
it).  tmem doesn't really integrate well with these capabilities.


-- 
error compiling committee.c: too many arguments to function

