Message-ID: <CALCETrVCPgbW3_qdgn54EP445WNPNC594PKhRoKQyNLVZBEKAw@mail.gmail.com>
Date:	Thu, 13 Mar 2014 11:03:25 -0700
From:	Andy Lutomirski <luto@...capital.net>
To:	Simo Sorce <ssorce@...hat.com>
Cc:	Vivek Goyal <vgoyal@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	cgroups@...r.kernel.org,
	Network Development <netdev@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>, Tejun Heo <tj@...nel.org>,
	jkaluza@...hat.com, lpoetter@...hat.com, kay@...hat.com
Subject: Re: [PATCH 2/2] net: Implement SO_PEERCGROUP

On Thu, Mar 13, 2014 at 10:57 AM, Simo Sorce <ssorce@...hat.com> wrote:
> On Thu, 2014-03-13 at 10:55 -0700, Andy Lutomirski wrote:
>> On Thu, Mar 13, 2014 at 10:51 AM, Simo Sorce <ssorce@...hat.com> wrote:
>> > On Wed, 2014-03-12 at 19:12 -0700, Andy Lutomirski wrote:
>> >> On Wed, Mar 12, 2014 at 6:43 PM, Simo Sorce <ssorce@...hat.com> wrote:
>> >> > On Wed, 2014-03-12 at 18:21 -0700, Andy Lutomirski wrote:
>> >> >> On Wed, Mar 12, 2014 at 6:17 PM, Simo Sorce <ssorce@...hat.com> wrote:
>> >> >> > On Wed, 2014-03-12 at 14:19 -0700, Andy Lutomirski wrote:
>> >> >> >> On Wed, Mar 12, 2014 at 2:16 PM, Simo Sorce <ssorce@...hat.com> wrote:
>> >> >> >>
>> >> >> >> >
>> >> >> >> > Connection time is all we do care about, and all we can care about.
>> >> >> >>
>> >> >> >> You have not answered why.
>> >> >> >
>> >> >> > We are going to disclose information to the peer based on policy that
>> >> >> > depends on the cgroup the peer is part of. All we care about is who opened
>> >> >> > the connection; if the peer wants to pass on that information after it
>> >> >> > has obtained it, there is nothing we can do, so connection time is all we
>> >> >> > really care about.
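
For illustration, the consumer side of what is being described here would
presumably be a getsockopt() call at accept() time, roughly as in the sketch
below. SO_PEERCGROUP is only what this patch series proposes: the numeric
value used here is a placeholder, and the exact buffer and length semantics
come from the patch, not from this sketch.

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_PEERCGROUP
#define SO_PEERCGROUP 0  /* placeholder only; the real value would come from the patched headers */
#endif

static void show_peer_cgroup(int connfd)
{
    char cgroup[4096];
    socklen_t len = sizeof(cgroup);

    /* Ask the kernel for the cgroup path the peer was in at connect time. */
    if (getsockopt(connfd, SOL_SOCKET, SO_PEERCGROUP, cgroup, &len) == 0)
        printf("peer cgroup at connect time: %.*s\n", (int)len, cgroup);
}
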
>> >> >>
>> >> >> Can you give a realistic example?
>> >> >>
>> >> >> I could say that I'd like to disclose information to processes based
>> >> >> on their rlimits at the time they connected, but I don't think that
>> >> >> would carry much weight.
>> >> >
>> >> > We want to be able to show a different user list from SSSD based on the
>> >> > docker container that is asking for it.
>> >> >
>> >> > This works by having libnss_sss.so from the containerized application
>> >> > connect to an SSSD daemon running on the host or in another container.
>> >> >
>> >> > The only way to distinguish between containers "from the outside" is to
>> >> > look up the cgroup of the requesting process. It has a unique container
>> >> > ID, and can therefore be mapped to the appropriate policy that will let
>> >> > us decide which 'user domain' to serve to the container.
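
For reference, a minimal sketch of that lookup as a server could attempt it
today, assuming it already has the requesting pid (e.g. from SO_PEERCRED).
This is the step the rest of the thread calls racy: the pid can be recycled,
or the process can change cgroups, between connect() and the read.

#include <stdio.h>
#include <sys/types.h>

static int print_peer_cgroups(pid_t pid)
{
    char path[64];
    char line[512];
    FILE *f;

    /* pid would come from e.g. getsockopt(fd, SOL_SOCKET, SO_PEERCRED, ...) */
    snprintf(path, sizeof(path), "/proc/%d/cgroup", (int)pid);
    f = fopen(path, "r");
    if (!f)
        return -1;
    /* Each line looks like "N:controllers:/path", e.g. a docker container id. */
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}
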
>> >> >
>> >>
>> >> I can think of at least three other ways to do this.
>> >>
>> >> 1. Fix Docker to use user namespaces and use the uid of the requesting
>> >> process via SCM_CREDENTIALS.
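
A minimal sketch of what option 1 looks like on the receiving side, with
SO_PASSCRED enabled so the kernel attaches an SCM_CREDENTIALS control message
carrying the sender's pid/uid/gid (error handling abbreviated):

#define _GNU_SOURCE            /* for struct ucred */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static void recv_with_creds(int fd)
{
    char buf[256];
    union {
        char cbuf[CMSG_SPACE(sizeof(struct ucred))];
        struct cmsghdr align;
    } u;
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.cbuf, .msg_controllen = sizeof(u.cbuf),
    };
    int on = 1;

    /* Ask the kernel to attach SCM_CREDENTIALS to incoming messages. */
    setsockopt(fd, SOL_SOCKET, SO_PASSCRED, &on, sizeof(on));

    if (recvmsg(fd, &msg, 0) < 0)
        return;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_CREDENTIALS) {
            struct ucred cr;

            memcpy(&cr, CMSG_DATA(c), sizeof(cr));
            /* pid/uid/gid are translated into the receiver's namespaces. */
            printf("peer pid=%d uid=%u gid=%u\n",
                   (int)cr.pid, (unsigned)cr.uid, (unsigned)cr.gid);
        }
    }
}
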
>> >
>> > This is not practical: I have no control over what UIDs will be used
>> > within a container, and IIRC user namespaces have severe limitations
>> > that may make them unusable in some situations. Forcing the use of user
>> > namespaces on docker to satisfy my use case is not in my power.
>>
>> Except that Docker w/o userns is basically completely insecure unless
>> selinux or apparmor is in use, so this may not matter.
>>
>> >
>> >> 2. Docker is a container system, so use the "container" (aka
>> >> namespace) APIs.  There are probably several clever things that could
>> >> be done with /proc/<pid>/ns.
>> >
>> > pid is racy; if it weren't, I would simply go straight
>> > to /proc/<pid>/cgroup ...
>>
>> How about:
>>
>> open("/proc/self/ns/ipc", O_RDONLY);
>> send the result over SCM_RIGHTS?
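
Spelled out, that suggestion is roughly the following client-side sketch; the
choice of the ipc namespace file and the single data byte are illustrative
only, not something the thread specifies:

#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

static int send_ns_fd(int sock)
{
    char data = 'N';   /* SCM_RIGHTS needs at least one byte of ordinary data */
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    union {
        char cbuf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.cbuf, .msg_controllen = sizeof(u.cbuf),
    };
    struct cmsghdr *c;
    int ret = -1;
    int nsfd = open("/proc/self/ns/ipc", O_RDONLY);

    if (nsfd < 0)
        return -1;

    memset(&u, 0, sizeof(u));
    c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type  = SCM_RIGHTS;
    c->cmsg_len   = CMSG_LEN(sizeof(int));
    /* The receiver gets its own descriptor referring to this namespace file. */
    memcpy(CMSG_DATA(c), &nsfd, sizeof(int));

    if (sendmsg(sock, &msg, 0) >= 0)
        ret = 0;

    close(nsfd);
    return ret;
}
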
>
> This needs to work with existing clients; existing clients don't do
> this.
>

Wait... you want completely unmodified clients in a container to talk
to a service that they don't even realize is outside the container, and
for that service to magically behave differently because the container
is there?  And there's no per-container proxy involved?  And every
container is connecting to *the very same socket*?

I just can't imagine this working well regardless of what magic socket
options you add, especially if user namespaces aren't in use.

--Andy
