Date:	Sun, 12 Jul 2009 09:20:22 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Anthony Liguori <anthony@...emonkey.ws>
Cc:	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	npiggin@...e.de, akpm@...l.org, jeremy@...p.org,
	xen-devel@...ts.xensource.com, tmem-devel@....oracle.com,
	alan@...rguk.ukuu.org.uk, linux-mm@...ck.org,
	kurt.hackel@...cle.com, Rusty Russell <rusty@...tcorp.com.au>,
	dave.mccracken@...cle.com, Marcelo Tosatti <mtosatti@...hat.com>,
	sunil.mushran@...cle.com, Avi Kivity <avi@...hat.com>,
	Schwidefsky <schwidefsky@...ibm.com>, chris.mason@...cle.com,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: RE: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux

> > that information; but tmem is trying to go a step further by making
> > the cooperation between the OS and hypervisor more explicit
> > and directly beneficial to the OS.
> 
> KVM definitely falls into the camp of trying to minimize 
> modification to the guest.

No argument there.  Well, maybe one :-)  KVM not only tries to
minimize guest modification, it heavily encourages unmodified
guests.  Tmem is philosophically in favor of finding a balance
between things that work well with no changes to any OS (and
thus work just fine whether or not the OS is running in a
virtual environment), and things that could work better if the
OS is aware that it is running in a virtual environment.

For those who believe virtualization is a flash in the pan,
making no modifications to the OS is the right answer.  For
those who believe it will be pervasive, finding the right
balance is a critical step in operating system evolution.

(Sorry for the Sunday morning evangelizing :-)

> >> If there was one change to tmem that would make it more
> >> palatable, for me it would be changing the way pools are
> >> "allocated".  Instead of getting an opaque handle from the
> >> hypervisor, I would force the guest to allocate its own
> >> memory and to tell the hypervisor that it's a tmem pool.
> >
> > I can see how it might be useful for KVM though.  Once the
> > core API and all the hooks are in place, a KVM implementation of
> > tmem could attempt something like this.
> 
> It's the core API that is really the issue.  The semantics of
> tmem (external memory pool with copy interface) are really what
> is problematic.  The basic concept, notifying the VMM about
> memory that can be recreated by the guest to avoid the VMM
> having to swap before reclaim, is great and I'd love to see
> Linux support it in some way.

Is it the tmem API itself, or the precache/preswap API layered
on top of it, that is problematic?  Both currently assume
copying, but perhaps the precache/preswap API could, with minor
modifications, meet KVM's needs better?
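
To make the discussion concrete, here is the shape of the copy
interface as I think of it -- a simplified sketch, not verbatim
from the patches:

/*
 * Illustrative sketch only; simplified from the RFC.  The key
 * property under discussion is the copy interface: pages are
 * always copied into or out of hypervisor-owned memory and are
 * never directly addressable by the guest.
 */
#include <linux/fs.h>
#include <linux/mm.h>

/* Ephemeral (precache) side: the hypervisor may silently discard
 * a page at any time, so a get may fail even after a successful
 * put. */
int precache_put(struct address_space *mapping, pgoff_t index,
		 struct page *page);		/* copy page into tmem */
int precache_get(struct address_space *mapping, pgoff_t index,
		 struct page *empty_page);	/* copy it back out */

/* Persistent (preswap) side: a successful put guarantees a later
 * get will succeed, so the guest may drop its own copy. */
int preswap_put(struct page *page);
int preswap_get(struct page *page);

And if I understand your alternative correctly, a registration
model might look something like this instead -- purely
hypothetical, all names invented:

#include <linux/types.h>

/* Hypothetical registration-based variant: the guest allocates
 * the pool from its own RAM and merely tells the hypervisor
 * about it, so no opaque handle and no copying are needed and
 * the pages stay directly addressable by the guest. */
struct tmem_pool_reg {
	u64 base_gfn;	/* guest frame number of first pool page */
	u64 nr_pages;	/* pool size in pages */
	u32 flags;	/* e.g. ephemeral vs. persistent */
};

/* Hypothetical hypercall: the hypervisor may later reclaim
 * ephemeral pages from a registered pool by revoking them
 * instead of swapping them. */
int tmem_register_pool(struct tmem_pool_reg *reg);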

> > Yes, the Xen implementation of tmem does accounting on a per-pool
> > and a per-guest basis and exposes the data via a privileged
> > "tmem control" hypercall.
> 
> I was talking about accounting within the guest.  It's not
> just a matter of accounting within the mm, it's also about
> accounting in userspace.  A lot of software out there depends
> on getting detailed statistics from Linux about how much memory
> is in use in order to determine things like memory pressure.
> If you introduce a new class of memory, you need a new class of
> statistics to expose to userspace and all those tools need
> updating.

OK, I see.

Well, first, tmem's very name means memory that is "beyond the
range of normal perception".  This is certainly not the first
class of memory in use in data centers that can't be accounted
for at process granularity; disk array caches are the primary
example.  And many tools that work well in a non-virtualized OS
are already worthless or misleading in a virtual environment.

Second, CPUs are getting much more complicated, with massive
pipelines, many layers of caches each with different
characteristics, etc., and it is becoming increasingly difficult
to measure performance accurately and reproducibly at a very
fine granularity.  One can only expect other resources, such as
memory, to move in the same direction.
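
That said, if the tools really do need numbers: the guest sees
every precache/preswap call, so it could count them itself and
export the totals to userspace.  A minimal hypothetical sketch,
all names invented:

#include <linux/seq_file.h>

/* Hypothetical counters, bumped from the in-kernel
 * precache/preswap hooks on every successful operation. */
static unsigned long tmem_eph_puts, tmem_eph_gets, tmem_pers_pages;

/* Hypothetical procfs show routine (registration via
 * single_open() etc. omitted), giving the memory-pressure tools
 * mentioned above something to read. */
static int tmem_stats_show(struct seq_file *m, void *v)
{
	seq_printf(m, "EphemeralPuts:   %lu\n", tmem_eph_puts);
	seq_printf(m, "EphemeralGets:   %lu\n", tmem_eph_gets);
	seq_printf(m, "PersistentPages: %lu\n", tmem_pers_pages);
	return 0;
}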

