[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CALCETrXWzja6W6y=p9MrtynGZMsrD5KQu0KBK6Bs1dQxnesv2A@mail.gmail.com>
Date: Thu, 13 Mar 2014 10:55:16 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Simo Sorce <ssorce@...hat.com>
Cc: Vivek Goyal <vgoyal@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
cgroups@...r.kernel.org,
Network Development <netdev@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>, Tejun Heo <tj@...nel.org>,
jkaluza@...hat.com, lpoetter@...hat.com, kay@...hat.com
Subject: Re: [PATCH 2/2] net: Implement SO_PEERCGROUP
On Thu, Mar 13, 2014 at 10:51 AM, Simo Sorce <ssorce@...hat.com> wrote:
> On Wed, 2014-03-12 at 19:12 -0700, Andy Lutomirski wrote:
>> On Wed, Mar 12, 2014 at 6:43 PM, Simo Sorce <ssorce@...hat.com> wrote:
>> > On Wed, 2014-03-12 at 18:21 -0700, Andy Lutomirski wrote:
>> >> On Wed, Mar 12, 2014 at 6:17 PM, Simo Sorce <ssorce@...hat.com> wrote:
>> >> > On Wed, 2014-03-12 at 14:19 -0700, Andy Lutomirski wrote:
>> >> >> On Wed, Mar 12, 2014 at 2:16 PM, Simo Sorce <ssorce@...hat.com> wrote:
>> >> >>
>> >> >> >
>> >> >> > Connection time is all we do and can care about.
>> >> >>
>> >> >> You have not answered why.
>> >> >
>> >> > We are going to disclose information to the peer based on policy that
>> >> > depends on the cgroup the peer is part of. All we care for is who opened
>> >> > the connection, if the peer wants to pass on that information after it
>> >> > has obtained it there is nothing we can do, so connection time is all we
>> >> > really care about.
>> >>
>> >> Can you give a realistic example?
>> >>
>> >> I could say that I'd like to disclose information to processes based
>> >> on their rlimits at the time they connected, but I don't think that
>> >> would carry much weight.
>> >
>> > We want to be able to show different user's list from SSSD based on the
>> > docker container that is asking for it.
>> >
>> > This works by having libnsss_sss.so from the containerized application
>> > connect to an SSSD daemon running on the host or in another container.
>> >
>> > The only way to distinguish between containers "from the outside" is to
>> > lookup the cgroup of the requesting process. It has a unique container
>> > ID, and can therefore be mapped to the appropriate policy that will let
>> > us decide which 'user domain' to serve to the container.
>> >
>>
>> I can think of at least three other ways to do this.
>>
>> 1. Fix Docker to use user namespaces and use the uid of the requesting
>> process via SCM_CREDENTIALS.
>
> This is not practical, I have no control on what UIDs will be used
> within a container, and IIRC user namespaces have severe limitations
> that may make them unusable in some situations. Forcing the use of user
> namespaces on docker to satisfy my use case is not in my power.
Except that Docker w/o userns is basically completely insecure unless
selinux or apparmor is in use, so this may not matter.
>
>> 2. Docker is a container system, so use the "container" (aka
>> namespace) APIs. There are probably several clever things that could
>> be done with /proc/<pid>/ns.
>
> pid is racy, if it weren't I would simply go straight
> to /proc/<pid>/cgroups ...
How about:
open("/proc/self/ns/ipc", O_RDONLY);
send the result over SCM_RIGHTS?
>
>> 3. Given that Docker uses network namespaces, I assume that the socket
>> connection between the two sssd instances either comes from Docker
>> itself or uses socket inodes. In either case, the same mechanism
>> should be usable for authentication.
>
> It is a unix socket, ie bind mounted on the container filesystem, not
> sure network namespaces really come into the picture, and I do not know
> of a racefree way of knowing what is the namespace of the peer at
> connect time.
> Is there a SO_PEER_NAMESPACE option ?
So give each container its own unix socket. Problem solved, no?
--Andy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists