Message-ID: <d693761e-2f2b-4d8c-ae4f-7f22479f6c0f@default>
Date:	Fri, 10 Jul 2009 08:23:07 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Anthony Liguori <anthony@...emonkey.ws>
Cc:	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	npiggin@...e.de, akpm@...l.org, jeremy@...p.org,
	xen-devel@...ts.xensource.com, tmem-devel@....oracle.com,
	alan@...rguk.ukuu.org.uk, linux-mm@...ck.org,
	kurt.hackel@...cle.com, Rusty Russell <rusty@...tcorp.com.au>,
	dave.mccracken@...cle.com, Marcelo Tosatti <mtosatti@...hat.com>,
	sunil.mushran@...cle.com, Avi Kivity <avi@...hat.com>,
	Schwidefsky <schwidefsky@...ibm.com>, chris.mason@...cle.com,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: RE: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux

> > But IMHO this is a corollary of the fundamental difference.  CMM2
> > takes more of the "VMware" approach, which is that OSes should
> > never have to be modified to run in a virtual environment.  (Oh,
> > but maybe modified just slightly to make the hypervisor a little
> > less clueless about the OS's resource utilization.)
> 
> While I always enjoy a good holy war, I'd like to avoid one here
> because I want to stay on the topic at hand.

Oops, sorry, I guess that was a bit inflammatory.  What I meant to
say is that inferring a guest's resource utilization from outside is
a very hard problem, and VMware (and I'm sure IBM too) has done a
fine job with it.  CMM2 explicitly provides some very useful
information from within the OS to the hypervisor so that the
hypervisor doesn't have to infer that information.  Tmem tries to go
a step further by making the cooperation between the OS and the
hypervisor explicit and directly beneficial to the OS.
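
(Concretely, the explicit cooperation is a small set of guest-invoked
operations against an opaque, hypervisor-managed pool.  The sketch
below is illustrative only; the exact names and signatures are the
ones in the patch series, not these:)

#include <linux/types.h>     /* u32, u64 */
#include <linux/mm_types.h>  /* struct page */

typedef u64 tmem_oid_t;      /* object id, e.g. an inode number */

int tmem_new_pool(u32 flags);    /* ephemeral vs. persistent, shared */
int tmem_put_page(u32 pool_id, tmem_oid_t oid, u32 index,
                  struct page *page);
int tmem_get_page(u32 pool_id, tmem_oid_t oid, u32 index,
                  struct page *page);
int tmem_flush_page(u32 pool_id, tmem_oid_t oid, u32 index);
int tmem_destroy_pool(u32 pool_id);

/* For an ephemeral (precache) pool a put is only a hint: a later
 * get may fail because the hypervisor dropped the page.  For a
 * persistent (preswap) pool a successful put guarantees the page
 * can be retrieved until it is flushed. */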

> If there was one change to tmem that would make it more palatable
> for me, it would be changing the way pools are "allocated".  Instead
> of getting an opaque handle from the hypervisor, I would force the
> guest to allocate its own memory and to tell the hypervisor that
> it's a tmem pool.

An interesting idea, but one of the nice advantages of tmem being
completely external to the OS is that the tmem pool may be much
larger than the total memory available to the OS.  As an extreme
example, assume you have one 1GB guest on a physical machine that
has 64GB physical RAM.  The guest now has 1GB of directly-addressable
memory and 63GB of indirectly-addressable memory through tmem.
That 63GB requires no page structs or other data structures in the
guest.  And in the current (external) implementation, the size
of each pool is constantly changing, sometimes dramatically, so
the guest would have to be prepared to handle this.  I also wonder
whether this would make shared tmem pools more difficult.
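
(To put a number on the page struct savings, assuming 4KB pages and,
say, 56-64 bytes per page struct: 63GB is roughly 16.5 million pages,
so tracking it directly would cost on the order of 1GB of metadata,
comparable to the guest's entire directly-addressable memory.)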

I can see how it might be useful for KVM though.  Once the
core API and all the hooks are in place, a KVM implementation of
tmem could attempt something like this.

> The big advantage of keeping the tmem pool part of the normal set of
> guest memory is that you don't introduce new challenges with respect
> to memory accounting.  Whether or not tmem is directly accessible
> from the guest, it is another memory resource.  I'm certain that
> you'll want to do accounting of how much tmem is being consumed by
> each guest

Yes, the Xen implementation of tmem does accounting on a per-pool
and a per-guest basis and exposes the data via a privileged
"tmem control" hypercall.

> and I strongly suspect that you'll want to do tmem accounting on a
> per-process basis.  I also suspect that doing tmem limiting for
> things like cgroups would be desirable.

This can be done now if each process or cgroup creates a different
tmem pool.  The proposed patch doesn't do this, but it certainly
seems possible.
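
(Sketching how that might look, reusing the illustrative
tmem_put_page() from above; none of these hooks exist in the posted
patch:)

/* Hypothetical: each cgroup owns a pool and is charged per put. */
struct tmem_cgroup {
        u32 pool_id;      /* created for the cgroup via tmem_new_pool() */
        u64 limit_pages;  /* cap on the cgroup's tmem footprint */
        u64 used_pages;   /* charged on put, uncharged on flush */
};

static int tmem_cgroup_put(struct tmem_cgroup *tcg, tmem_oid_t oid,
                           u32 index, struct page *page)
{
        int ret;

        if (tcg->used_pages >= tcg->limit_pages)
                return -ENOMEM;              /* over its tmem budget */
        ret = tmem_put_page(tcg->pool_id, oid, index, page);
        if (ret == 0)
                tcg->used_pages++;
        return ret;
}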

Dan
