Message-ID: <439BF796-53D3-48C9-8578-A0733DDE8001@intel.com>
Date:	Sat, 23 Jan 2016 23:41:22 +0000
From:	"Nakajima, Jun" <jun.nakajima@...el.com>
To:	Rik van Riel <riel@...hat.com>
CC:	"lsf-pc@...ts.linuxfoundation.org" <lsf-pc@...ts.linuxfoundation.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	Linux kernel Mailing List <linux-kernel@...r.kernel.org>,
	KVM list <kvm@...r.kernel.org>
Subject: Re: [LSF/MM TOPIC] VM containers


> On Jan 22, 2016, at 7:56 AM, Rik van Riel <riel@...hat.com> wrote:
> 
> Hi,
> 
> I am trying to gauge interest in discussing VM containers at the LSF/MM
> summit this year. Projects like ClearLinux, Qubes, and others are all
> trying to use virtual machines as better isolated containers.
> 
> That changes some of the goals the memory management subsystem has,
> from "use all the resources effectively" to "use as few resources as
> necessary, in case the host needs the memory for something else".
> 
> These VMs could be as small as running just one application, so this
> goes a little further than simply trying to squeeze more virtual
> machines into a system with frontswap and clean cache.

I would be very interested in discussing this topic, and I agree that "a topic exploring paravirt interfaces for anonymous memory would be really useful" (as James pointed out).

Beyond memory consumption, I would be interested in whether we can use paravirt interfaces (if any exist) to harden the kernel via memory protection in VMs. For example, the hypervisor could write-protect parts of a guest's page tables or kernel data structures; would that help?
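To make the idea concrete, here is a toy sketch (illustrative only, not real KVM or EPT code; all names are hypothetical) of the mechanism being suggested: the hypervisor clears the write permission on second-level mappings of guest pages that hold page tables, so any guest write to them traps to the hypervisor, which can then audit or reject it.

```python
# Toy model of hypervisor-enforced write protection of guest page-table
# pages. In real hardware the trap would be an EPT/NPT violation VM exit;
# here it is simulated with an exception.

class EptViolation(Exception):
    """Raised when the guest writes a page the hypervisor write-protected."""

class ToyHypervisor:
    def __init__(self):
        self.memory = {}              # gfn -> value (guest physical memory)
        self.write_protected = set()  # gfns holding guest page tables

    def protect_page_table(self, gfn):
        # Clear the write bit in the (simulated) second-level mapping.
        self.write_protected.add(gfn)

    def guest_write(self, gfn, value):
        if gfn in self.write_protected:
            # Trap instead of letting the guest (or an attacker in the
            # guest kernel) silently modify its own page tables.
            raise EptViolation(f"write to protected gfn {gfn:#x}")
        self.memory[gfn] = value

hv = ToyHypervisor()
hv.guest_write(0x1000, "data")     # ordinary guest memory: write succeeds
hv.protect_page_table(0x2000)      # guest registers 0x2000 as a page table
try:
    hv.guest_write(0x2000, "pte")  # attacker tries to rewrite a PTE
except EptViolation as e:
    print("trapped:", e)
```

The open question is the paravirt interface itself: how the guest tells the hypervisor which pages to protect, and how legitimate page-table updates are funneled through an audited path.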

> 
> Single-application VM sandboxes could also get their data differently,
> using (partial) host filesystem passthrough, instead of a virtual
> block device. This may change the relative utility of caching data
> inside the guest page cache, versus freeing up that memory and
> allowing the host to use it to cache things.
> 
> Are people interested in discussing this at LSF/MM, or is it better
> saved for a different forum?

In my view, it's worth discussing the details here with a focus on memory and storage; other areas, such as CPU scheduling and networking, would be better covered in a different forum. For example, the cost of context switching grows as applications are spread across more (small) VMs, because that tends to incur more VM exits.
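A back-of-envelope sketch of that last point (the per-switch costs below are assumed, round numbers purely for illustration): when every application lives in its own VM, nearly every context switch crosses a VM boundary and pays a VM exit/entry round trip on top of the ordinary switch cost.

```python
# Rough cost model for context switching across VM boundaries.
# Both constants are assumed, illustrative values, not measurements.

SWITCH_NS = 2_000   # assumed cost of an ordinary context switch
VMEXIT_NS = 5_000   # assumed extra cost of a VM exit + entry round trip

def switch_cost_ns(switches, cross_vm_fraction):
    """Total switching cost when some fraction of switches cross VMs."""
    cross_vm = int(switches * cross_vm_fraction)
    return switches * SWITCH_NS + cross_vm * VMEXIT_NS

# Apps consolidated in one VM: switches stay inside the guest.
one_big_vm = switch_cost_ns(10_000, cross_vm_fraction=0.0)
# One app per small VM: almost every switch is a VM switch.
many_small_vms = switch_cost_ns(10_000, cross_vm_fraction=1.0)
print(one_big_vm, many_small_vms)
```

With these assumed numbers the many-small-VMs case is several times more expensive for the same number of switches, which is why scheduling belongs in the broader discussion even if it is out of scope for LSF/MM.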

---
Jun
Intel Open Source Technology Center
