Message-ID: <m1ve04vkov.fsf@frodo.ebiederm.org>
Date: Thu, 19 Jun 2008 20:39:44 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Cedric Le Goater <clg@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Containers <containers@...ts.osdl.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Pavel Emelianov <xemul@...nvz.org>,
Serge Hallyn <serue@...ibm.com>
Subject: Re: [patch -mm 0/4] mqueue namespace
ebiederm@...ssion.com (Eric W. Biederman) writes:
> One way to fix that is to add a hidden directory to the mnt namespace,
> where magic in-kernel filesystems can be mounted, visible only with a
> magic openat flag. Then:
>
> fd = openat(AT_FDKERN, ".", O_DIRECTORY);
> fchdir(fd);
> umount("./mqueue", MNT_DETACH);
> mount("none", "./mqueue", "mqueue", 0, NULL);
>
> Would unshare the mqueue namespace.
>
> Implemented for Plan 9, this would solve the problem of how to get
> access to all of its special filesystems, since only bind mounts and
> remote filesystem mounts are available there. For Linux, thinking
> about it might shake the conversation up a bit.
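
Spelled out as a standalone program, the quoted sequence would look
roughly like the sketch below. This is purely illustrative: AT_FDKERN
and a remountable "mqueue" filesystem type are proposals from this
thread, not interfaces any shipping kernel provides, so the AT_FDKERN
value here is made up; umount2() is glibc's entry point for umount with
flags.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

/* Hypothetical: a special fd value naming the hidden in-kernel
 * directory, in the spirit of AT_FDCWD.  No real kernel defines it. */
#define AT_FDKERN (-200)

int main(void)
{
	/* Open the hidden directory that only AT_FDKERN exposes. */
	int fd = openat(AT_FDKERN, ".", O_DIRECTORY);
	if (fd < 0) {
		perror("openat");
		exit(1);
	}
	if (fchdir(fd) < 0) {
		perror("fchdir");
		exit(1);
	}
	/* Detach the mqueue instance we inherited... */
	if (umount2("./mqueue", MNT_DETACH) < 0)
		perror("umount2");
	/* ...and mount a fresh one; under the proposal this is what
	 * unshares the mqueue namespace. */
	if (mount("none", "./mqueue", "mqueue", 0, NULL) < 0) {
		perror("mount");
		exit(1);
	}
	return 0;
}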

Thinking about this some more: what is especially attractive about doing
all namespaces this way is that it solves two lurking problems:
1) How do you keep a namespace around without a process in it?
2) How do you enter a container?

If we could land the namespaces in the filesystem, we could easily
persist them past the point where the last process in one exits, if we
so choose.
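
For concreteness, a namespace can be kept alive without any process in
it by bind-mounting its /proc/PID/ns file, which is the mechanism
mainline kernels eventually provided and what ip netns does for network
namespaces. A minimal sketch against that interface, assuming a mounted
/proc, CAP_SYS_ADMIN, and the hypothetical pin file /run/keepns:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	/* Give this process a fresh UTS namespace to pin. */
	if (unshare(CLONE_NEWUTS) < 0) {
		perror("unshare");
		exit(1);
	}
	/* A bind-mount target must already exist, so create it. */
	int fd = open("/run/keepns", O_CREAT | O_WRONLY, 0600);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	close(fd);
	/* Bind the namespace file; the kernel now holds a reference,
	 * so the namespace outlives this process. */
	if (mount("/proc/self/ns/uts", "/run/keepns", NULL, MS_BIND, NULL) < 0) {
		perror("mount");
		exit(1);
	}
	return 0;
}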

Entering a container would then be a matter of replacing your current
namespace mounts with namespace mounts taken from the filesystem.

I expect performance would degrade in practice, but it is tempting to
implement it, run a benchmark, and see whether we can measure anything.

Eric