Date:   Thu, 12 Mar 2020 12:32:24 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Lorenz Bauer <lmb@...udflare.com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        kernel-team <kernel-team@...udflare.com>,
        Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH 0/5] Return fds from privileged sockhash/sockmap lookup

Alexei Starovoitov wrote:
> On Thu, Mar 12, 2020 at 09:16:34AM +0000, Lorenz Bauer wrote:
> > On Thu, 12 Mar 2020 at 01:58, Alexei Starovoitov
> > <alexei.starovoitov@...il.com> wrote:
> > >
> > > we do store the socket FD into a sockmap, but returning a new FD to that
> > > socket feels weird. The user space is supposed to hold those sockets. If it
> > > was a bpf prog that stored a socket, then what does user space want to do
> > > with that foreign socket? It likely belongs to some other process. Stealing
> > > it from another process doesn't feel right.
> > 
> > For our BPF socket dispatch control plane this is true by design: all sockets
> > belong to another process. The privileged user space is the steward of these,
> > and needs to make sure traffic is steered to them. I agree that stealing them is
> > weird, but after all this is CAP_NET_ADMIN only. pidfd_getfd allows you to
> > really steal an fd from another process, so that cat is out of the bag ;)
> 
> but there it goes through ptrace checks and LSM hooks, whereas here a similar
> security model cannot be enforced. A bpf prog can put any socket into a
> sockmap, and on the bpf_lookup_elem side there is no way to figure out the
> owner task of the socket to do ptrace checks. Just doing it all under
> CAP_NET_ADMIN is not a great security answer.
> 
> > Marek wrote a PoC control plane: https://github.com/majek/inet-tool
> > It is a CLI tool and not a service, so it can't hold on to any sockets.
> > 
> > You can argue that we should turn it into a service, but that leads to another
> > problem: there is no way of recovering these fds if the service crashes for
> > some reason. The only solution would be to restart all services, which in
> > our setup is really the same as rebooting the machine.
> > 
> > > Sounds like the use case is to take sockets one by one from one map, allocate
> > > another map and store them there? The whole process has plenty of races.
> > 
> > It doesn't have to race. Our user space can do the appropriate locking to ensure
> > that operations are atomic wrt. dispatching to sockets:
> > 
> > - lock
> > - read sockets from sockmap
> > - write sockets into new sockmap
> 
> but the bpf side may still need to insert them into the old map.
> Are you going to solve it with a flag for the prog to stop doing its job?
> Or will the prog know that it needs to put sockets into the second map now?
> It's really the same problem as with classic so_reuseport,
> which was solved with BPF_MAP_TYPE_REUSEPORT_SOCKARRAY.
> 
> > > I think it's better to tackle the problem from the resize perspective. IMO
> > > making it something like sk_local_storage (which is already a resizable
> > > pseudo map of sockets) is a better way forward.
> > 
> > Resizing is only one aspect. We may also need to shuffle services around,
> > think "defragmentation", and I think there will be other cases as we gain more
> > experience with the control plane. Being able to recover fds from the sockmap
> > will make it more resilient. Adding a special API for every one of these cases
> > seems cumbersome.
> 
> I think sockmap needs a redesign. Consider that today all sockets can be in any
> number of sk_local_storage pseudo maps. They are 'defragmented' and resizable.
> I think plugging socket redirect into sk_local_storage-like infra is the
> answer.

Socket redirect today can use any number of maps and redirect to any sock
in any map. There is no restriction on only being able to redirect to socks
in the same map. Further, the same sock can be in multiple maps, or even in
the same map in multiple slots. I think it's fairly similar to sk local
storage in this way.
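
To make this concrete, here is a minimal sketch of a verdict prog that
steers to either of two maps. Everything below is made up for
illustration: the map names, sizes, and the local_port check.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_SOCKMAP);
          __uint(max_entries, 16);
          __type(key, __u32);
          __type(value, __u64);
  } map_a SEC(".maps");

  struct {
          __uint(type, BPF_MAP_TYPE_SOCKMAP);
          __uint(max_entries, 16);
          __type(key, __u32);
          __type(value, __u64);
  } map_b SEC(".maps");

  SEC("sk_skb/stream_verdict")
  int steer(struct __sk_buff *skb)
  {
          __u32 key = 0;

          /* The target map is chosen per packet; nothing ties the
           * redirect to the map the source sock came from. */
          if (skb->local_port == 8000)
                  return bpf_sk_redirect_map(skb, &map_a, key, 0);
          return bpf_sk_redirect_map(skb, &map_b, key, 0);
  }

  char _license[] SEC("license") = "GPL";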

The restriction that the maps cannot grow/shrink is perhaps a bit
limiting. I can see how resizing might be useful. In my original load
balancer case a single application owned all the socks, so there was no
need to ever pull them back out of the map. We "knew" where they were. I
think resize ops could be added without too much redesign. Or a CREATE
flag could be used to add a sock as a new entry if needed. At some point
I guess someone will request it as a feature for Cilium, for example.
OTOH I'm not sure off-hand how to use a dynamically sized table for load
balancing. I need to know the size because I want to say something about
the hash distribution, and if the size is changing, do I still know
this? I really haven't considered it much.
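
On the flag idea: the existing BPF_NOEXIST update flag already gives
"insert only as a new entry" semantics from user space. A rough sketch,
where map_fd, key and sock_fd are placeholders:

  #include <errno.h>
  #include <bpf/bpf.h>

  /* Insert sock_fd at key only if the slot is currently empty. */
  static int sockmap_add_new(int map_fd, __u32 key, int sock_fd)
  {
          __u64 val = sock_fd;

          /* BPF_NOEXIST: fail with EEXIST rather than silently
           * replacing a socket already parked in this slot. */
          if (bpf_map_update_elem(map_fd, &key, &val, BPF_NOEXIST))
                  return -errno;
          return 0;
  }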

As an aside, the redirect helper could work with anything of sock type,
not just socks from maps. Now that we have the BTF infra to do it, we
could just type check that we have a sock and do the redirect regardless
of whether the sock is in a map or not. The map really provides two
functions: first, a way to attach programs to the socks, and second, a
stable array to hash over if needed.
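
Just to sketch the shape of it: a map-free redirect could look like the
below. Both lookup_target_sock() and bpf_sk_redirect() are hypothetical
and don't exist today; the verifier would rely on BTF to prove sk is
really a socket pointer.

  SEC("sk_skb/stream_verdict")
  int steer_any_sock(struct __sk_buff *skb)
  {
          /* Any socket would do, whether or not it lives in a map. */
          struct bpf_sock *sk = lookup_target_sock(skb); /* hypothetical */

          if (!sk)
                  return SK_DROP;
          return bpf_sk_redirect(skb, sk, 0);            /* hypothetical */
  }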

Rather than exposing the fds to user space, would a map copy API be
useful? I could imagine some useful cases where copy might be used:

 map_copy(map *A, map *B, map_key *key)

We would need to sort out what to do with key/value size changes, but
I can imagine this being useful for upgrades.
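
For example, an upgrade could walk the old map and copy each entry
in-kernel, never round-tripping an fd through user space. Rough sketch,
where map_copy() is the hypothetical API above and the pin path and
sizes are invented:

  #include <bpf/bpf.h>

  static int upgrade_copy(void)
  {
          int old_fd = bpf_obj_get("/sys/fs/bpf/svc_sockhash");
          int new_fd = bpf_create_map(BPF_MAP_TYPE_SOCKHASH,
                                      sizeof(__u64), sizeof(__u64),
                                      2048, 0);
          __u64 key, *prev = NULL;

          /* A NULL prev fetches the first key. */
          while (!bpf_map_get_next_key(old_fd, prev, &key)) {
                  map_copy(new_fd, old_fd, &key);   /* hypothetical */
                  prev = &key;
          }
          return new_fd;
  }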

Another option I've been considering is a garbage collection thread that
triggers at regular intervals. The BPF program it runs could do the copy
from map to map in kernel space, never exposing fds out of the kernel.
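
The user-space side of the trigger could be as dumb as kicking the GC
prog on a timer via BPF_PROG_TEST_RUN; the in-kernel map-to-map copy is
the part that would need new support. prog_fd below is a placeholder:

  #include <unistd.h>
  #include <bpf/bpf.h>

  static void gc_loop(int prog_fd)
  {
          struct bpf_prog_test_run_attr attr = {
                  .prog_fd = prog_fd,
          };

          for (;;) {
                  /* Run the GC prog once; it would move sockets
                   * between maps entirely inside the kernel. */
                  bpf_prog_test_run_xattr(&attr);
                  sleep(60);
          }
  }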

Thanks.
