Date:	Fri, 22 Jan 2016 08:05:06 -0800
From:	James Bottomley <James.Bottomley@...senPartnership.com>
To:	Rik van Riel <riel@...hat.com>, lsf-pc@...ts.linuxfoundation.org
Cc:	Linux Memory Management List <linux-mm@...ck.org>,
	Linux kernel Mailing List <linux-kernel@...r.kernel.org>,
	KVM list <kvm@...r.kernel.org>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] VM containers

On Fri, 2016-01-22 at 10:56 -0500, Rik van Riel wrote:
> Hi,
> 
> I am trying to gauge interest in discussing VM containers at the
> LSF/MM
> summit this year. Projects like ClearLinux, Qubes, and others are all
> trying to use virtual machines as better isolated containers.
> 
> That changes some of the goals the memory management subsystem has,
> from "use all the resources effectively" to "use as few resources as
> necessary, in case the host needs the memory for something else".
> 
> These VMs could be as small as running just one application, so this
> goes a little further than simply trying to squeeze more virtual
> machines into a system with frontswap and cleancache.
> 
> Single-application VM sandboxes could also get their data
> differently,
> using (partial) host filesystem passthrough, instead of a virtual
> block device. This may change the relative utility of caching data
> inside the guest page cache, versus freeing up that memory and
> allowing the host to use it to cache things.
> 
> Are people interested in discussing this at LSF/MM, or is it better
> saved for a different forum?

Actually, I don't really think this is a container technology topic,
but I'm only objecting to the title, not the content.  I don't know
Qubes, but I do know ClearLinux ... it's VM-based.  I think the
question that really needs answering is whether we can improve the
paravirt interfaces for memory control in VMs.  The biggest advantage
containers have over hypervisors is that the former know exactly
what's going on with memory in the guests, thanks to the shared
kernel; the latter have no real clue, because the separate guest
kernel communicates with the host only via hardware interfaces, which
leads to all sorts of bad scheduling decisions.
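
To make "no real clue" concrete: everything below is guest-internal
state that never crosses the hardware interface on its own.  A
minimal userspace sketch of the kind of figures a stats channel has
to ferry across explicitly (virtio-balloon's stats queue already
carries a subset of these):

/* Sketch: guest-side memory figures a paravirt stats channel would
 * carry to the host.  Only the guest kernel knows these numbers;
 * the host sees none of them through the hardware interface alone.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	const char *keys[] = { "MemTotal:", "MemFree:", "Cached:",
			       "AnonPages:", "SwapFree:" };

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
			if (!strncmp(line, keys[i], strlen(keys[i])))
				fputs(line, stdout);	/* values in kB */
	}
	fclose(f);
	return 0;
}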

If I look at the current state of play, it looks like hypervisors can
get an easy handle on file-backed memory using the persistent memory
interfaces; that's how ClearLinux achieves its speed-up today.
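
That's also the trade-off Rik raises for passthrough above: file-backed
pages the guest doesn't cache are pages the host can cache once for
everyone.  A minimal sketch of a guest application opting out of its
own page cache (the path is hypothetical; DAX on a pmem-backed
filesystem gets you the same effect wholesale):

/* Sketch: skip the guest page cache for a host-backed file via
 * O_DIRECT, leaning on the host's cache instead.  O_DIRECT needs
 * block-aligned buffers and transfer sizes.
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	ssize_t n;
	int fd;

	if (posix_memalign(&buf, 4096, 4096))	/* O_DIRECT alignment */
		return 1;
	fd = open("/var/guest/data.bin", O_RDONLY | O_DIRECT); /* hypothetical path */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	n = read(fd, buf, 4096);	/* bypasses the guest page cache */
	printf("read %zd bytes, uncached in the guest\n", n);
	close(fd);
	free(buf);
	return 0;
}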
However, controlling guests under memory pressure requires us to have
a handle on the anonymous memory as well.  I think a topic exploring
paravirt interfaces for anonymous memory would be genuinely useful.
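
For concreteness, here's the guest half of the problem in a few
lines, assuming only standard madvise() semantics.  The pages go back
to the guest kernel immediately, but nothing in the hardware
interface tells the host so; today only ballooning closes that gap,
and clumsily:

/* Sketch: a guest frees anonymous memory with MADV_DONTNEED, yet the
 * host still sees the backing pages as in use by the VM.  Getting
 * them back to the host needs a paravirt channel -- exactly the gap
 * in the interfaces today.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB of anonymous memory */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 1, len);		/* fault the pages in */
	if (madvise(p, len, MADV_DONTNEED))	/* guest frees them... */
		perror("madvise");
	/* ...but the host has no idea they are free. */
	munmap(p, len);
	return 0;
}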

James
