Message-Id: <1158776099.8574.89.camel@galaxy.corp.google.com>
Date: Wed, 20 Sep 2006 11:14:59 -0700
From: Rohit Seth <rohitseth@...gle.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Christoph Lameter <clameter@....com>,
Nick Piggin <nickpiggin@...oo.com.au>,
CKRM-Tech <ckrm-tech@...ts.sourceforge.net>, devel@...nvz.org,
linux-kernel <linux-kernel@...r.kernel.org>,
Linux Memory Management <linux-mm@...ck.org>
Subject: Re: [patch00/05]: Containers(V2)- Introduction
On Wed, 2006-09-20 at 20:06 +0200, Peter Zijlstra wrote:
> On Wed, 2006-09-20 at 10:52 -0700, Christoph Lameter wrote:
> > On Wed, 20 Sep 2006, Rohit Seth wrote:
> >
> > > Right now the memory handler in this container subsystem is written in
> > > such a way that when the existing kernel reclaimer kicks in, it will first
> > > operate on pages from containers that are over their limit. But in
> > > general I like the notion of containerizing the whole reclaim code.
> >
> > Which comes naturally with cpusets.
>
> How are shared mappings dealt with, are pages charged to the set that
> first faults them in?
>
Anonymous pages (the simpler case) get charged to the faulting task's
container.
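
A minimal userspace C sketch of that rule (the container/task structures
and charge_anon_page() below are illustrative assumptions for this note,
not the data structures in the patch): an anonymous fault charges one
page against whatever container the faulting task belongs to, and going
over the limit only flags the container as the preferred reclaim target.

#include <stdio.h>

/* Illustrative model only -- not the patch's actual data structures. */
struct container {
	const char    *name;
	unsigned long  page_limit;   /* pages this container may use   */
	unsigned long  pages_used;   /* pages currently charged to it  */
};

struct task {
	const char       *comm;
	struct container *cnt;       /* container the task runs in     */
};

/* Charge one anonymous page to the faulting task's container.
 * Returns 1 if the container is now over its limit, i.e. it becomes
 * the first target when the kernel reclaimer kicks in.             */
static int charge_anon_page(struct task *t)
{
	struct container *c = t->cnt;

	c->pages_used++;
	return c->pages_used > c->page_limit;
}

int main(void)
{
	struct container web  = { .name = "web", .page_limit = 2 };
	struct task     httpd = { .comm = "httpd", .cnt = &web };

	for (int i = 0; i < 3; i++)
		if (charge_anon_page(&httpd))
			printf("%s: container '%s' over limit (%lu/%lu pages)\n",
			       httpd.comm, web.name, web.pages_used, web.page_limit);
	return 0;
}
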
Filesystem pages (which can be shared across tasks running in different
containers) are handled differently: every time a new file mapping is
created, it is bound to the container of the process creating that
mapping. All subsequent pages belonging to this mapping are charged to
that container, irrespective of which tasks, possibly running in other
containers, access them.
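
Continuing the same illustrative sketch (again an assumption about the
mechanism, reusing the container and task structures above; mapping,
bind_mapping() and charge_file_page() are made-up names): the mapping
remembers the creator's container once, and every page-cache page added
to it afterwards is charged there, no matter which task faults it in.

/* Builds on struct container / struct task from the sketch above. */
struct mapping {
	struct container *cnt;   /* container charged for this file's pages */
};

/* Called once, when the file mapping is created: bind it to the
 * creating process's container.                                   */
static void bind_mapping(struct mapping *m, struct task *creator)
{
	m->cnt = creator->cnt;
}

/* Charge a page-cache page for this mapping.  The accessing task's
 * own container is deliberately ignored -- the charge always goes
 * to the container the mapping was bound to at creation time.     */
static void charge_file_page(struct mapping *m, struct task *accessor)
{
	(void)accessor;
	m->cnt->pages_used++;
}

In this model, moving a file between containers (the not-yet-implemented
case below) would amount to rebinding m->cnt and transferring the charged
page count.
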
Currently, I've not implemented a mechanism to allow a file to be
explicitly moved into or out of a container. But when that gets
implemented, all pages belonging to a mapping will move with it out of
the old container (or into the new one).
-rohit