Date:	Fri, 22 Jan 2016 12:11:21 -0500
From:	Johannes Weiner <hannes@...xchg.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	lsf-pc@...ts.linuxfoundation.org,
	Linux Memory Management List <linux-mm@...ck.org>,
	Linux kernel Mailing List <linux-kernel@...r.kernel.org>,
	KVM list <kvm@...r.kernel.org>
Subject: Re: [LSF/MM TOPIC] VM containers

Hi,

On Fri, Jan 22, 2016 at 10:56:15AM -0500, Rik van Riel wrote:
> I am trying to gauge interest in discussing VM containers at the LSF/MM
> summit this year. Projects like ClearLinux, Qubes, and others are all
> trying to use virtual machines as better isolated containers.
> 
> That changes some of the goals the memory management subsystem has,
> from "use all the resources effectively" to "use as few resources as
> necessary, in case the host needs the memory for something else".

I would be very interested in discussing this topic, because I think
the issue is more generic than these VM applications. We are facing
the same issues with regular containers, where aggressive caching
works against the goal of trimming workloads down to their bare
minimum so they can be packed as tightly as possible.

With per-cgroup LRUs and thrash detection, we have infrastructure in
place that could allow us to accomplish this. Right now we only enter
reclaim once memory runs out, but we could add an allocation mode that
would prefer to always reclaim from the local LRU before increasing
the memory footprint, and only expand once we detect thrashing in the
page cache. That would keep the workloads neatly trimmed at all times.
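
To make that concrete, here is a toy userspace model of the decision
(all of the names, my_cgroup, try_local_reclaim and the refault
fields, are made up for illustration, not actual MM interfaces):
charge a page by first trying to recycle from the group's own LRU,
and only grow the footprint once the refault signal says local
reclaim has started to thrash.

	/*
	 * Toy model of "reclaim the local LRU before growing".
	 * Invented types and fields, not kernel code.
	 */
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct my_cgroup {
		size_t footprint;        /* pages charged to the group */
		size_t lru_reclaimable;  /* cold pages we could drop locally */
		unsigned long refaults;  /* recent refaults = thrash signal */
		unsigned long refault_thresh;
	};

	/* Drop a cold page from the group's own LRU, if one exists. */
	static bool try_local_reclaim(struct my_cgroup *cg)
	{
		if (cg->lru_reclaimable == 0)
			return false;
		cg->lru_reclaimable--;
		cg->footprint--;
		return true;
	}

	/*
	 * Charge one page: prefer recycling the group's own memory,
	 * only expand once the refault rate says we are thrashing.
	 */
	static void charge_page(struct my_cgroup *cg)
	{
		bool thrashing = cg->refaults > cg->refault_thresh;

		if (!thrashing && try_local_reclaim(cg)) {
			cg->footprint++;	/* net footprint unchanged */
			return;
		}
		cg->footprint++;		/* workload really needs more */
	}

	int main(void)
	{
		struct my_cgroup cg = {
			.footprint = 100, .lru_reclaimable = 20,
			.refaults = 0, .refault_thresh = 10,
		};

		charge_page(&cg);	/* recycled locally, stays at 100 */
		cg.refaults = 50;	/* pretend thrash detection fired */
		charge_page(&cg);	/* now we expand to 101 */
		printf("footprint: %zu\n", cg.footprint);
		return 0;
	}

In the kernel this decision would presumably live in the charge /
allocation path and feed off the existing workingset refault
statistics, but the shape of the policy would be the same.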

For virtualized environments, the thrashing information would be
communicated slightly differently to the page allocator and/or the
host, but otherwise the fundamental principles should be the same.

We'd have to figure out how to balance the aggressiveness there and
how to describe it to the user, as I can imagine users wanting to
tune this based on their tolerance for thrashing: if pages are reused
every M ms, keep them cached; if they are only reused every N ms,
it's better to free the memory and refetch them from disk when needed.
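
Expressed as a toy check (reuse_interval_ms and tolerance_ms are
invented names, not a proposed interface), the knob might boil down
to something like:

	#include <stdbool.h>

	/*
	 * Keep a page cached while it is reused more often than the
	 * user's stated tolerance; beyond that, reclaiming it and
	 * refetching from disk on demand is the better trade.
	 */
	bool worth_keeping(unsigned long reuse_interval_ms,
			   unsigned long tolerance_ms)
	{
		return reuse_interval_ms <= tolerance_ms;
	}

A real tunable would probably want to weigh refetch cost against
memory pressure rather than use a single cutoff, but it gives an idea
of what the user-visible number would mean.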

And we don't have thrash detection in secondary slab caches (yet).

> Are people interested in discussing this at LSF/MM, or is it better
> saved for a different forum?

If more people are interested, I think that could be a great topic.
