Message-ID: <aec7e5c30702190116j26efcba3oe5223584f99ac25a@mail.gmail.com>
Date: Mon, 19 Feb 2007 18:16:42 +0900
From: "Magnus Damm" <magnus.damm@...il.com>
To: "Andrew Morton" <akpm@...ux-foundation.org>
Cc: "Balbir Singh" <balbir@...ibm.com>, linux-kernel@...r.kernel.org,
vatsa@...ibm.com, ckrm-tech@...ts.sourceforge.net, xemul@...ru,
linux-mm@...ck.org, menage@...gle.com, svaidy@...ux.vnet.ibm.com,
devel@...nvz.org
Subject: Re: [RFC][PATCH][0/4] Memory controller (RSS Control)
On 2/19/07, Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Mon, 19 Feb 2007 12:20:19 +0530 Balbir Singh <balbir@...ibm.com> wrote:
>
> > This patch applies on top of Paul Menage's container patches (V7) posted at
> >
> > http://lkml.org/lkml/2007/2/12/88
> >
> > It implements a controller within the containers framework for limiting
> > memory usage (RSS usage).
> The key part of this patchset is the reclaim algorithm:
>
> Alas, I fear this might have quite bad worst-case behaviour. One small
> container which is under constant memory pressure will churn the
> system-wide LRUs like mad, and will consume rather a lot of system time.
> So it's a point at which container A can deleteriously affect things which
> are running in other containers, which is exactly what we're not supposed
> to do.
It's nice to have a simple memory controller. The downside seems to be
that it doesn't scale very well when it comes to reclaim, but maybe that
just comes with being simple. Step by step, and maybe this is a good
first step?
Ideally I'd like to see unmapped pages handled on a per-container LRU
with a fallback to the system-wide LRUs. Shared/mapped pages could be
handled using PTE ageing/unmapping instead of page ageing, but that may
consume too many resources to be practical.
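For the record, here is a toy user-space sketch of that two-level
arrangement; all names are made up and nothing below is from Balbir's
patchset. The point is just that a container reclaims from its own
list first and only falls back to the global LRU when its own list is
empty, so a container under constant pressure mostly churns its own
pages:

/*
 * Toy model of per-container LRU with global fallback.
 * Hypothetical names -- not from the RSS controller patches.
 */
#include <stdio.h>
#include <stdlib.h>

struct page {
	int id;
	struct page *next;
};

struct lru {
	struct page *head, *tail;	/* head = oldest, reclaim from here */
	unsigned long nr;
};

static void lru_add(struct lru *l, struct page *p)
{
	p->next = NULL;
	if (l->tail)
		l->tail->next = p;
	else
		l->head = p;
	l->tail = p;
	l->nr++;
}

static struct page *lru_isolate(struct lru *l)
{
	struct page *p = l->head;

	if (!p)
		return NULL;
	l->head = p->next;
	if (!l->head)
		l->tail = NULL;
	l->nr--;
	return p;
}

/* Reclaim nr pages: container LRU first, global LRU only as fallback. */
static unsigned long shrink_container(struct lru *clru, struct lru *global,
				      unsigned long nr)
{
	unsigned long done = 0;

	while (done < nr) {
		struct page *p = lru_isolate(clru);
		if (!p)
			p = lru_isolate(global);	/* fallback */
		if (!p)
			break;				/* nothing left */
		free(p);				/* "reclaim" the page */
		done++;
	}
	return done;
}

int main(void)
{
	struct lru clru = { 0 }, global = { 0 };
	int i;

	for (i = 0; i < 4; i++) {
		struct page *p = calloc(1, sizeof(*p));
		p->id = i;
		lru_add(i < 2 ? &clru : &global, p);
	}
	printf("reclaimed %lu pages\n", shrink_container(&clru, &global, 3));
	return 0;
}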
/ magnus