Message-ID: <20081014185320.GA24908@us.ibm.com>
Date: Tue, 14 Oct 2008 13:53:20 -0500
From: "Serge E. Hallyn" <serue@...ibm.com>
To: Tejun Heo <tj@...nel.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Greg KH <greg@...ah.com>, Al Viro <viro@...IV.linux.org.uk>,
Benjamin Thery <benjamin.thery@...l.net>,
linux-kernel@...r.kernel.org, Al Viro <viro@....linux.org.uk>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: sysfs: tagged directories not merged completely yet
Quoting Tejun Heo (tj@...nel.org):
> >> Can somebody hammer the big picture regarding namespaces into my
> >> small head?
> >
> > 100,000 foot view. A namespace introduces a scope so multiple
> > objects can have the same name. Like network devices.
> >
> > 10,000 foot view. The network namespace looks to user space
> > as if the kernel has multiple independent network stacks.
> >
> > 1000 foot view. I have two network devices named lo, and sysfs
> > does not currently have a place for me to put them.
> >
> > Leakage and being able to fool an application that it has the entire
> > kernel to itself are not concerns. The goal is simply to get the
> > entire object-name-to-object translation behind that boundary, and
> > then the namespace work is done. We have largely achieved that, and
> > the code to do so, once complete, is reasonable enough that it should
> > be no worse than dealing with any other kernel bug.
>
> Yes, I'm aware of the goals. What I'm curious about is the consensus
> regarding network namespaces and all their implications. They add a lot
> of complexity in a lot of places; e.g., following the sysfs code
> becomes quite a bit more difficult after the namespace changes (maybe
> it's just me, but still). So, I was asking whether people generally
> agree that having the namespace thing is worth the added complexity.
>
> I think it serves a pretty small group of users. Hosting service
I don't think that's true.
Let's say I want to run debootstrap and set up a minimal image
to run postfix. Now if I want to run that on my laptop as its own
minimal separate machine, I need to run qemu or kvm. That's a huge
amount of overhead. Once we finally get network namespaces (minus the
remaining sysfs piece) finished, I can set up a 10-line config file,
download and install https://sourceforge.net/projects/lxc/, run
lxc-execute -n postfix-cont /bin/bash
and voila, I have postfix running as though on a separate machine,
but with none of the kvm/qemu overhead. That means that instead
of being able to run one at a time, I can run... hundreds? So I
think this is something everyone will find useful - but of course
I *am* biased :)
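
For what it's worth, a rough sketch of what that config could look
like - the key names are the lxc ones as I remember them, and the
bridge name, address, and file name below are made up purely for
illustration:

  # postfix-cont.conf: hypothetical minimal application-container config
  lxc.utsname = postfix-cont
  # veth pair, with the host end plugged into an existing bridge
  lxc.network.type = veth
  lxc.network.link = br0
  lxc.network.flags = up
  lxc.network.ipv4 = 10.0.0.2/24

  lxc-execute -n postfix-cont -f postfix-cont.conf /bin/bash

Inside that shell, postfix sees its own lo and eth0, its own routing
table, and so on.
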
> providers and people trying to migrate processes from one machine to
> another, both of which can be served pretty well with virtualization.
> It does have higher overhead, both processing-power- and memory-wise, but
> IIUC the former is being actively worked on w/ new processor features
> like nested page tables and all, and memory is really cheap these
> days, so I'm a bit skeptical how much this is needed and how much we
> should pay for it.
>
> Another avenue to explore is whether the partial view of proc and sysfs
> can be implemented in a less pervasive way. Implementing it via FUSE
> might not be easier per se, but I think it would be better to do it
Again, FUSE doesn't address the *core* issue (sysfs needing a way to
create files for multiple devices with the same name). But I believe
Benjamin was looking into a minimal patch to fix that. Benjamin,
have you gotten anywhere with that?
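
To make the collision concrete, here's a sketch (this assumes a
util-linux recent enough to ship unshare(1); the exact commands are
not the point):

  # initial namespace: the loopback device shows up in sysfs as usual
  ls /sys/class/net          # -> lo, eth0, ...

  # start a shell in a brand-new network namespace
  unshare --net /bin/bash

  # this namespace has its own loopback, also named "lo"
  ip link show lo

Two live netdevs are now called "lo", but there is only one
/sys/class/net/lo for them to be registered under - that is exactly
the gap the tagged-directory work is meant to fill.
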
> that way if we can, instead of adding complexity to both proc and
> sysfs.
>
> One last thing that came to mind is: how would uevents be handled?
> I.e., what happens if a network card that is presented as ethN in the
> namespace goes away? How does the system deal with it?
>
> Thanks.
>
> --
> tejun