Message-Id: <1193933548.6271.78.camel@localhost>
Date: Thu, 01 Nov 2007 09:12:28 -0700
From: Dave Hansen <haveblue@...ibm.com>
To: Ulrich Drepper <drepper@...hat.com>
Cc: Pavel Emelyanov <xemul@...nvz.org>, Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [patch] PID namespace design bug, workaround
On Thu, 2007-11-01 at 07:56 -0700, Ulrich Drepper wrote:
> Pavel Emelyanov wrote:
> > With this set we'll be able to mark pid namespaces as EXPERIMENTAL
> > or even BROKEN, so nobody will be able to create them. So can we, please,
> > keep things as they are for now - the appropriate fix will be ready
> > soon.
>
> You sound far too optimistic for my taste. I probably haven't seen the
> proposal you have in mind but everything else I have seen simply doesn't
> work without breaking something.
Yeah, we definitely realize that this inhibits things that were
perfectly fine before.
As Eric mentioned in his reply to your message last year, the primary
goal here is isolation. We'd eventually like to be able to pick a
container up and move it to another system. That's going to be awfully
hard if the container is sharing a resource with a part of the system
which is not moving.
Pid namespaces (along with the others) give us the isolation to keep
these interactions from happening except in a controlled manner,
breaking the ties that might bind a container to one particular system.
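
To make that concrete, here's a minimal userspace sketch of putting a
child into a new pid namespace via clone(CLONE_NEWPID).  This assumes a
kernel with pid namespace support and headers that define CLONE_NEWPID,
and it needs CAP_SYS_ADMIN; error handling is trimmed for brevity:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char stack[64 * 1024];

static int child(void *arg)
{
        /* Inside the new namespace this task sees itself as pid 1,
         * even though the parent's namespace gives it another pid. */
        printf("in namespace: pid %d\n", getpid());
        return 0;
}

int main(void)
{
        /* Stack grows down on most arches, so pass the top of it. */
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWPID | SIGCHLD, NULL);
        if (pid < 0) {
                perror("clone");
                return 1;
        }
        printf("from parent: child is pid %d\n", pid);
        waitpid(pid, NULL, 0);
        return 0;
}

The point of the two printf()s is exactly the isolation above: the
same task has one pid inside the namespace and another outside, and
only the inside one has to stay stable across a migration.
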
Think of how many user-visible APIs deal with files and filenames.
However, there can certainly be files that are unavailable to certain
processes based on their membership in a particular filesystem
namespace. In fact, we use chroot() to try to _make_ certain files
unavailable.
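
As a toy illustration of that (the jail path is made up, and this
needs CAP_SYS_CHROOT, i.e. typically root):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        if (chroot("/srv/jail") != 0 || chdir("/") != 0) {
                perror("chroot");
                return 1;
        }
        /* "/etc/passwd" now resolves to /srv/jail/etc/passwd;
         * the real /etc/passwd is no longer reachable by name. */
        FILE *f = fopen("/etc/passwd", "r");
        printf("open inside jail %s\n", f ? "succeeded" : "failed");
        return 0;
}
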
-- Dave