Message-ID: <Pine.LNX.4.64N.0611010016350.27467@attu4.cs.washington.edu>
Date: Wed, 1 Nov 2006 00:35:04 -0800 (PST)
From: David Rientjes <rientjes@...washington.edu>
To: Pavel Emelianov <xemul@...nvz.org>
cc: balbir@...ibm.com, vatsa@...ibm.com, dev@...nvz.org,
sekharan@...ibm.com, ckrm-tech@...ts.sourceforge.net,
haveblue@...ibm.com, linux-kernel@...r.kernel.org, pj@....com,
matthltc@...ibm.com, dipankar@...ibm.com, rohitseth@...gle.com,
menage@...gle.com, linux-mm@...ck.org
Subject: Re: [ckrm-tech] RFC: Memory Controller
On Wed, 1 Nov 2006, Pavel Emelianov wrote:
> beancounters may be implemented above any (or nearly any) userspace
> interface, no questions. But we're trying to come to agreement here,
> so I just say my point of view.
>
> I don't mind having file system based interface, I just believe that
> configfs is not so good for it. I've already answered that having
> our own filesystem for it sounds better than having configfs.
>
> Maybe we can summarize what we have come to?
>

I've seen nothing but praise for Paul Menage's suggestion of implementing
a single-level containers abstraction for processes and attaching those
containers to various resource controller nodes (disk, network, memory,
cpu).  The question of whether to use configfs is at the forefront of
that discussion because making any progress on the implementation is
difficult without first deciding it, and the containers abstraction
patchset uses configfs as its interface.
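
To make the shape of this concrete, here is a rough sketch of the data
structures such a single-level abstraction implies.  All of the names
(container, res_controller, MAX_CONTROLLERS) are made up for
illustration and are not taken from Paul's patchset:

	#include <linux/list.h>

	#define MAX_CONTROLLERS 4	/* disk, network, memory, cpu */

	struct container;

	/* a resource controller "node" a container may be attached to */
	struct res_controller {
		const char *name;
		int (*charge)(struct container *cont, unsigned long amount);
		void (*uncharge)(struct container *cont, unsigned long amount);
	};

	/* one flat level of containers: no parent/child pointers */
	struct container {
		char name[64];			/* directory name in the fs */
		struct list_head tasks;		/* tasks attached to us */
		struct res_controller *controllers[MAX_CONTROLLERS];
	};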

The original objection against configfs concerned the lifetime of the
resource controller.  But this is actually a two-part question, since
there are two interfaces: one for the containers and one for the
controllers.  At present the only discussion taking place is about the
container, so that objection can wait.  After boot, there are two
options:

 - require the user to mount the configfs filesystem with a single
   system-wide container as default
     i. include all processes in that container by default
    ii. include no processes in that container, force the user to add
        them
 - create the entire container abstraction upon boot and attach all
   processes to it in a manner similar to procfs (sketched below)

[ In both scenarios, kernel behavior is unchanged so long as no resource
  controller node is attached to any container: it is as if the
  container(s) didn't exist. ]
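
As a sketch of the second option (again with made-up names; the
container and container_link fields would be hypothetical additions to
task_struct), boot-time setup could look roughly like this:

	#include <linux/init.h>
	#include <linux/sched.h>

	static struct container init_container;

	static int __init container_init(void)
	{
		struct task_struct *g, *p;

		INIT_LIST_HEAD(&init_container.tasks);

		/* attach every existing task to the system-wide container */
		read_lock(&tasklist_lock);
		do_each_thread(g, p) {
			p->container = &init_container;
			list_add(&p->container_link, &init_container.tasks);
		} while_each_thread(g, p);
		read_unlock(&tasklist_lock);

		return 0;
	}

Newly forked tasks would inherit their parent's container in
copy_process(), so the container stays fully populated without any user
intervention, just as procfs appears without being asked for.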

Another objection against configfs was the fact that you must define
CONFIG_CONFIGFS_FS to use CONFIG_CONTAINERS.  This objection does not
make much sense, since we seem to be leaning toward abandoning the
syscall approach and taking an fs approach in the first place.  So
CONFIG_CONTAINERS would need to include its own lightweight filesystem
if we cannot use CONFIG_CONFIGFS_FS, which seems redundant, since this
is exactly what configfs is for: a configurable filesystem interface to
the kernel.  We definitely do not want two or more interfaces to
_containers_, and rolling our own filesystem would mean reimplementing
already existing infrastructure.
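
To see how much of configfs we would end up duplicating, here is
roughly the minimum a private "containerfs" would need just to exist as
a mountable filesystem (2.6-era VFS API; the containerfs name and its
magic number are invented for this sketch):

	#include <linux/fs.h>
	#include <linux/init.h>

	#define CONTAINERFS_MAGIC 0x636f6e74	/* "cont", made up */

	static int container_fill_super(struct super_block *sb, void *data,
					int silent)
	{
		/* empty file table: just enough for a valid root directory */
		static struct tree_descr no_files[] = { { "" } };

		return simple_fill_super(sb, CONTAINERFS_MAGIC, no_files);
	}

	static int container_get_sb(struct file_system_type *fs_type,
				    int flags, const char *dev_name,
				    void *data, struct vfsmount *mnt)
	{
		return get_sb_single(fs_type, flags, data,
				     container_fill_super, mnt);
	}

	static struct file_system_type container_fs_type = {
		.name		= "containerfs",
		.get_sb		= container_get_sb,
		.kill_sb	= kill_litter_super,
	};

	static int __init container_fs_init(void)
	{
		return register_filesystem(&container_fs_type);
	}

And that is before any of the per-directory attribute plumbing that
configfs already provides.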

The criticism that users can create containers and then not use them
shouldn't be an issue if this is carefully implemented.  In fact, I
proposed that all processes be initially attached to a single
system-wide container at boot, regardless of whether you've loaded any
controllers, just as UMA machines use node 0 for system-wide memory.
We should incur no overhead for having empty or _full_ containers if we
haven't loaded controllers or have configured them properly to include
the right containers.
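
The no-overhead property falls out naturally in the charging path:
building on the hypothetical structures sketched earlier, a charge
reduces to a single pointer test when no controller is attached:

	static inline int container_charge(struct task_struct *p,
					   int resource,
					   unsigned long amount)
	{
		struct res_controller *ctlr =
				p->container->controllers[resource];

		/*
		 * No controller loaded or attached: behave as if
		 * containers didn't exist.
		 */
		if (!ctlr)
			return 0;

		return ctlr->charge(p->container, amount);
	}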

So if we re-read Paul Menage's patchset that abstracts containers away
from cpusets and uses configfs, we can see that we are almost there,
with the exception of making it a single-layer "hierarchy" as he has
already proposed.  The resource controller "nodes" that these
containers can be attached to are a separate issue at this point and
shouldn't be confused with it.

David