Message-ID: <4A567E3B.90609@codemonkey.ws>
Date:	Thu, 09 Jul 2009 18:33:15 -0500
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	Dan Magenheimer <dan.magenheimer@...cle.com>
CC:	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	npiggin@...e.de, akpm@...l.org, jeremy@...p.org,
	xen-devel@...ts.xensource.com, tmem-devel@....oracle.com,
	alan@...rguk.ukuu.org.uk, linux-mm@...ck.org,
	kurt.hackel@...cle.com, Rusty Russell <rusty@...tcorp.com.au>,
	dave.mccracken@...cle.com, Marcelo Tosatti <mtosatti@...hat.com>,
	sunil.mushran@...cle.com, Avi Kivity <avi@...hat.com>,
	Schwidefsky <schwidefsky@...ibm.com>, chris.mason@...cle.com,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux

Dan Magenheimer wrote:
> But this means that either the content of that page must have been
> preserved somewhere or the discard fault handler has sufficient
> information to go back and get the content from the source (e.g.
> the filesystem).  Or am I misunderstanding?
>   

As Rik said, it's the latter.

> With tmem, the equivalent of the "failure to access a discarded page"
> is inline and synchronous, so if the tmem access "fails", the
> normal code immediately executes.
>   

Yup.  This is the main difference AFAICT.  It's really just API 
semantics within Linux.
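
To make the shape of that concrete, here is a rough sketch in C;
tmem_get() and read_from_backing_store() are invented names for
illustration, not the actual patch API:

/* Sketch of the inline, synchronous style; assumes linux/types.h
 * for u32/u64. */
struct page;

int tmem_get(u32 pool_id, u64 object, u32 index, struct page *page);
int read_from_backing_store(u64 object, u32 index, struct page *page);

static int read_page(u32 pool_id, u64 object, u32 index,
		     struct page *page)
{
	/* Try the hypervisor-side copy first... */
	if (tmem_get(pool_id, object, index, page) == 0)
		return 0;	/* hit: page data was copied in */

	/* ...and on a miss, fall through to the normal path
	 * (e.g. re-read the page from the filesystem) right away.
	 * No special fault handler, no asynchrony. */
	return read_from_backing_store(object, index, page);
}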

You could clearly use the volatile state of CMM2 to implement tmem as an 
API in Linux.  The get/put functions would set a flag so that if the 
discard handler was invoked while the operation was in flight, the 
operation could safely fail.  That's why I claimed tmem is a subset of CMM2.
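
Something like the following (a sketch only; the names are made up and
nothing here is CMM2's or the patch's real interface):

static int op_in_progress;	/* set across a get/put */
static int discarded_during_op;	/* discard handler fired meanwhile */

/* Hypothetical discard handler: if a volatile page vanishes while a
 * get/put is in flight, just flag it so the operation fails cleanly. */
static void discard_handler(void *page)
{
	(void)page;
	if (op_in_progress)
		discarded_during_op = 1;
	/* otherwise: the normal "page is gone" bookkeeping */
}

/* Hypothetical get: copy out of a volatile page, reporting a clean
 * miss if the hypervisor discarded it mid-copy.  (Needs string.h and
 * errno.h in this userspace-style sketch.) */
static int tmem_get_from_volatile(void *dst, const void *vpage, size_t len)
{
	int ret = 0;

	op_in_progress = 1;
	discarded_during_op = 0;
	memcpy(dst, vpage, len);
	if (discarded_during_op)
		ret = -ENOENT;	/* caller falls back synchronously */
	op_in_progress = 0;
	return ret;
}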

> I suppose changing Linux to utilize the two tmem services
> as described above is a semantic change.  But to me it
> seems no more of a semantic change than requiring a new
> special page fault handler because a page of memory might
> disappear behind the OS's back.
>
> But IMHO this is a corollary of the fundamental difference.  CMM2's
> approach is more the "VMware" approach, which is that OSes should
> never have to be modified to run in a virtual environment.  (Oh, but
> maybe modified just slightly to make the hypervisor a little less
> clueless about the OS's resource utilization.)

While I always enjoy a good holy war, I'd like to avoid one here because 
I want to stay on the topic at hand.

If there was one change to tmem that would make it more palatable, for 
me it would be changing the way pools are "allocated".  Instead of 
getting an opaque handle from the hypervisor, I would force the guest to 
allocate its own memory and to tell the hypervisor that it's a tmem 
pool.  You could then introduce semantics about whether the guest was 
allowed to directly manipulate the memory as long as it was in the 
pool.  It would be required to access the memory via get/put functions 
that, under Xen, would end up being a hypercall and a copy.  Presumably 
you would do some tricks with ballooning to allocate empty memory in Xen 
and then use those addresses as tmem pools.  On KVM, we could do 
something more clever.
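
Concretely, the registration could look something like this
(hypothetical names throughout; hv_tmem_* stands in for whatever
hypercalls would actually exist):

#include <stddef.h>

struct tmem_pool {
	void   *pages;	/* guest-allocated backing memory */
	size_t  size;
};

/* Hypothetical hypercall wrappers */
int hv_tmem_register_pool(void *pages, size_t size);
int hv_tmem_get(void *dst, const void *pool_page, size_t len);
int hv_tmem_put(void *pool_page, const void *src, size_t len);

static int tmem_pool_create(struct tmem_pool *pool, void *pages, size_t size)
{
	pool->pages = pages;	/* e.g. obtained via the balloon driver */
	pool->size  = size;

	/* Tell the hypervisor this guest memory is now a tmem pool;
	 * from here on the guest touches it only through get/put,
	 * which on Xen would be a hypercall plus a copy. */
	return hv_tmem_register_pool(pages, size);
}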

The big advantage of keeping the tmem pool as part of the normal set of 
guest memory is that you don't introduce new challenges with respect to 
memory accounting.  Whether or not tmem is directly accessible from the 
guest, it is another memory resource.  I'm certain that you'll want to 
do accounting of how much tmem is being consumed by each guest, and I 
strongly suspect that you'll want to do tmem accounting on a per-process 
basis.  I also suspect that doing tmem limiting for things like cgroups 
would be desirable.

That all points to making tmem normal memory so that all that 
infrastructure can be reused.  I'm not sure how well this maps to Xen 
guests, but it works out fine when the VMM is capable of presenting 
memory to the guest without actually allocating it (via overcommit).

Regards,

Anthony Liguori