Date:	Sun, 12 Jul 2009 09:28:38 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Avi Kivity <avi@...hat.com>
Cc:	Anthony Liguori <anthony@...emonkey.ws>,
	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	npiggin@...e.de, akpm@...l.org, jeremy@...p.org,
	xen-devel@...ts.xensource.com, tmem-devel@....oracle.com,
	alan@...rguk.ukuu.org.uk, linux-mm@...ck.org,
	kurt.hackel@...cle.com, Rusty Russell <rusty@...tcorp.com.au>,
	dave.mccracken@...cle.com, Marcelo Tosatti <mtosatti@...hat.com>,
	sunil.mushran@...cle.com, Schwidefsky <schwidefsky@...ibm.com>,
	chris.mason@...cle.com, Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: RE: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux

> > That 63GB requires no page structs or other data structures in the
> > guest.  And in the current (external) implementation, the size
> > of each pool is constantly changing, sometimes dramatically, so
> > the guest would have to be prepared to handle this.  I also wonder
> > if this would make shared-tmem-pools more difficult.
> 
> Having no struct pages is also a downside; for example this guest
> cannot have more than 1GB of anonymous memory without swapping like
> mad.  Swapping to tmem is fast but still a lot slower than having
> the memory available.

Yes, true.  Tmem offers little additional advantage for workloads
whose working set varies widely in size and is primarily anonymous
memory.  That larger-scale "memory shaping" is left to ballooning
and hotplug.

> tmem makes life a lot easier for the hypervisor and the guest, but
> also gives up a lot of flexibility.  There's a difference between
> memory and a very fast synchronous backing store.

I don't see that it gives up that flexibility.  System administrators
are still free to size their guests properly.  Tmem's contribution
is in environments that are highly dynamic, where the only real
alternative is sizing memory maximally (and thus wasting it for the
vast majority of the time, when the working set is smaller).

> > I can see how it might be useful for KVM though.  Once the
> > core API and all the hooks are in place, a KVM implementation of
> > tmem could attempt something like this.
> >    
> 
> My worry is that tmem for kvm leaves a lot of niftiness on the table,
> since it was designed for a hypervisor with much simpler memory
> management.  kvm can already use spare memory for backing guest swap,
> and can already convert unused guest memory to free memory (by
> swapping it).  tmem doesn't really integrate well with these
> capabilities.

I'm certainly open to identifying compromises and layer modifications
that help meet the needs of both Xen and KVM (and others).  For
example, if we can determine that the basic hook placement for
precache/preswap (or even just precache for KVM) can be built
on different underlying layers, that would be great!
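
For concreteness, here is a rough sketch of what a backend-neutral
precache interface might look like.  This is purely illustrative; the
struct and function names below are hypothetical, not the interface
in the patch series:

/*
 * Hypothetical, for illustration only.  A backend-neutral set of
 * precache operations that the page-cache hook sites would call
 * through, so the hook placement is independent of the backend.
 */
struct page;	/* the guest page being offered or filled */

struct precache_ops {
	/* offer a clean page-cache page; the backend may accept or drop it */
	int (*put_page)(int pool_id, unsigned long object_id,
			unsigned long index, struct page *page);
	/* try to fill a page from the backend; 0 on success, nonzero on miss */
	int (*get_page)(int pool_id, unsigned long object_id,
			unsigned long index, struct page *page);
	/* invalidate a single page, a whole object (file), or a whole pool */
	void (*flush_page)(int pool_id, unsigned long object_id,
			   unsigned long index);
	void (*flush_object)(int pool_id, unsigned long object_id);
	void (*flush_pool)(int pool_id);
};

/* a Xen tmem backend, a KVM host-side cache, etc. would each register
 * its own implementation behind the same hook sites */
int register_precache_ops(struct precache_ops *ops);

The point is that the VFS/swap-side hook placement stays the same;
only the registered ops would differ per hypervisor.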

Dan