Date:	Fri, 30 Nov 2007 23:39:37 -0800
From:	"Paul Menage" <menage@...gle.com>
To:	"Nick Piggin" <nickpiggin@...oo.com.au>
Cc:	balbir@...ux.vnet.ibm.com,
	"Linux Memory Management List" <linux-mm@...ck.org>,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	"linux kernel mailing list" <linux-kernel@...r.kernel.org>,
	"Peter Zijlstra" <a.p.zijlstra@...llo.nl>,
	"Hugh Dickins" <hugh@...itas.com>,
	"Lee Schermerhorn" <Lee.Schermerhorn@...com>,
	"KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>,
	"Pavel Emelianov" <xemul@...ru>,
	"YAMAMOTO Takashi" <yamamoto@...inux.co.jp>,
	"Rik van Riel" <riel@...hat.com>,
	"Christoph Lameter" <clameter@....com>,
	"Martin J. Bligh" <mbligh@...gle.com>,
	"Andy Whitcroft" <andyw@...ibm.com>,
	"Srivatsa Vaddagiri" <vatsa@...ibm.com>
Subject: Re: What can we do to get ready for memory controller merge in 2.6.25

On Nov 29, 2007 6:11 PM, Nick Piggin <nickpiggin@...oo.com.au> wrote:
> And also some
> results or even anecdotes of where this is going to be used would be
> interesting...

We want to be able to run multiple isolated jobs on the same machine.
So being able to limit how much memory each job can consume, in terms
of anonymous memory and page cache, is useful. I've not had much time
to look at the patches in great detail, but they seem to provide a
sensible way to assign and enforce static limits on a bunch of jobs.
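
To make the static-limit case concrete, here's a rough userspace
sketch. The mount point, the memory.limit_in_bytes file and the
per-group tasks file are what I'd expect the cgroup filesystem
interface to look like, but the exact names aren't settled yet, and
the job binary path is made up:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	/* Assumes the memory controller is mounted at /cgroups/memory. */
	const char *grp = "/cgroups/memory/batch_job";
	char path[256], pid[32];

	if (mkdir(grp, 0755) && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	/* Static limit: this job gets at most 512MB of anon + page cache. */
	snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", grp);
	write_file(path, "536870912");

	/* Move the current task into the group, then exec the job. */
	snprintf(path, sizeof(path), "%s/tasks", grp);
	snprintf(pid, sizeof(pid), "%d", (int)getpid());
	write_file(path, pid);

	execlp("/usr/bin/my_batch_job", "my_batch_job", (char *)NULL);
	perror("execlp");
	return 1;
}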

Some of our requirements are a bit beyond this, though:

In our experience, users are not good at figuring out how much memory
they really need. In general they tend to massively over-estimate
their requirements. So we want some way to determine how much of its
allocated memory a job is actively using, and how much could be thrown
away or swapped out without bothering the job too much.

Of course, the definition of "active use" is tricky - one possibility
that we're looking at is "has been accessed within the last N
seconds", where N can be configured appropriately for different jobs
depending on the job's latency requirements. Active use should also be
reported for pages that can't be freed quickly, e.g. mlocked or
dirty pages, or anon pages on a swapless system. Inactive pages should
be easily freeable, and be the first ones to go in the event of memory
pressure. (From a scheduling point of view we can treat them as free
memory, and schedule more jobs on the machine)
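
As a toy illustration of that rule (userspace pseudocode, not kernel
code; all the names are made up), the classification might look like:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct page_info {
	time_t last_access;		/* when the page was last referenced */
	bool mlocked, dirty, anon;
};

static bool swap_available = false;	/* false on a swapless system */

/* N is the per-job staleness threshold, tuned to its latency needs. */
static bool page_actively_used(const struct page_info *p, time_t now, time_t n)
{
	if (p->mlocked || p->dirty)
		return true;		/* can't be freed quickly */
	if (p->anon && !swap_available)
		return true;		/* nowhere to push it */
	return (now - p->last_access) < n;
}

int main(void)
{
	time_t now = time(NULL), n = 300;	/* N = 5 minutes for this job */
	struct page_info pages[] = {
		{ now - 10,   false, false, false },	/* recently read page cache */
		{ now - 3600, false, false, false },	/* cold page cache: inactive */
		{ now - 3600, true,  false, false },	/* mlocked: always active */
		{ now - 3600, false, false, true  },	/* cold anon, no swap: active */
	};
	int i, active = 0, total = sizeof(pages) / sizeof(pages[0]);

	for (i = 0; i < total; i++)
		active += page_actively_used(&pages[i], now, n);

	printf("%d of %d pages actively used; %d could be reclaimed or "
	       "treated as free for scheduling\n", active, total, total - active);
	return 0;
}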

The existing active/inactive distinction doesn't really capture this,
since it's relative rather than absolute.

We want to be able to overcommit a machine, so the sum of the cgroup
memory limits can exceed the total machine memory. So we
need control over what happens when there's global memory pressure,
and a way to ensure that the low-latency jobs don't get bogged down in
reclaim (or OOM) due to the activity of batch jobs.
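
A made-up example of the numbers involved, treating inactive pages as
schedulable headroom:

#include <stdio.h>

int main(void)
{
	long machine_mb = 16384;			/* physical RAM */
	long limit_mb[]  = { 8192, 8192, 8192 };	/* static limits: 24GB total */
	long active_mb[] = { 3072, 2048, 4096 };	/* actively used right now */
	long used = 0, limits = 0;
	int i, n = 3;

	for (i = 0; i < n; i++) {
		limits += limit_mb[i];
		used += active_mb[i];
	}

	/* Inactive pages count as free when deciding whether another
	 * job can be placed on this machine. */
	printf("limits total %ldMB on a %ldMB machine (overcommitted)\n",
	       limits, machine_mb);
	printf("active use %ldMB, schedulable headroom %ldMB\n",
	       used, machine_mb - used);
	return 0;
}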

Paul
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
