Date:   Sun, 29 Nov 2020 17:05:59 -0800
From:   Andrey Ignatov <rdna@...com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
CC:     Stanislav Fomichev <sdf@...gle.com>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH bpf-next 2/3] bpf: allow bpf_{s,g}etsockopt from cgroup
 bind{4,6} hooks

Alexei Starovoitov <alexei.starovoitov@...il.com> [Tue, 2020-11-17 20:05 -0800]:
> On Tue, Nov 17, 2020 at 4:17 PM Stanislav Fomichev <sdf@...gle.com> wrote:
> >
> > I now have to lock/unlock the socket for the bind hook execution.
> > That shouldn't cause any overhead because the socket is unbound
> > and shouldn't receive any traffic.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
> > ---
> >  include/linux/bpf-cgroup.h | 12 ++++++------
> >  net/core/filter.c          |  4 ++++
> >  net/ipv4/af_inet.c         |  2 +-
> >  net/ipv6/af_inet6.c        |  2 +-
> >  4 files changed, 12 insertions(+), 8 deletions(-)
> >
> > diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
> > index ed71bd1a0825..72e69a0e1e8c 100644
> > --- a/include/linux/bpf-cgroup.h
> > +++ b/include/linux/bpf-cgroup.h
> > @@ -246,11 +246,11 @@ int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
> >         __ret;                                                                 \
> >  })
> >
> > -#define BPF_CGROUP_RUN_PROG_INET4_BIND(sk, uaddr)                             \
> > -       BPF_CGROUP_RUN_SA_PROG(sk, uaddr, BPF_CGROUP_INET4_BIND)
> > +#define BPF_CGROUP_RUN_PROG_INET4_BIND_LOCK(sk, uaddr)                        \
> > +       BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_INET4_BIND, NULL)
> >
> > -#define BPF_CGROUP_RUN_PROG_INET6_BIND(sk, uaddr)                             \
> > -       BPF_CGROUP_RUN_SA_PROG(sk, uaddr, BPF_CGROUP_INET6_BIND)
> > +#define BPF_CGROUP_RUN_PROG_INET6_BIND_LOCK(sk, uaddr)                        \
> > +       BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, BPF_CGROUP_INET6_BIND, NULL)
> >
> >  #define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (cgroup_bpf_enabled && \
> >                                             sk->sk_prot->pre_connect)
> > @@ -434,8 +434,8 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
> >  #define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk,skb) ({ 0; })
> >  #define BPF_CGROUP_RUN_PROG_INET_SOCK(sk) ({ 0; })
> >  #define BPF_CGROUP_RUN_PROG_INET_SOCK_RELEASE(sk) ({ 0; })
> > -#define BPF_CGROUP_RUN_PROG_INET4_BIND(sk, uaddr) ({ 0; })
> > -#define BPF_CGROUP_RUN_PROG_INET6_BIND(sk, uaddr) ({ 0; })
> > +#define BPF_CGROUP_RUN_PROG_INET4_BIND_LOCK(sk, uaddr) ({ 0; })
> > +#define BPF_CGROUP_RUN_PROG_INET6_BIND_LOCK(sk, uaddr) ({ 0; })
> >  #define BPF_CGROUP_RUN_PROG_INET4_POST_BIND(sk) ({ 0; })
> >  #define BPF_CGROUP_RUN_PROG_INET6_POST_BIND(sk) ({ 0; })
> >  #define BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr) ({ 0; })
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index 2ca5eecebacf..21d91dcf0260 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -6995,6 +6995,8 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >                 return &bpf_sk_storage_delete_proto;
> >         case BPF_FUNC_setsockopt:
> >                 switch (prog->expected_attach_type) {
> > +               case BPF_CGROUP_INET4_BIND:
> > +               case BPF_CGROUP_INET6_BIND:
> >                 case BPF_CGROUP_INET4_CONNECT:
> >                 case BPF_CGROUP_INET6_CONNECT:
> >                         return &bpf_sock_addr_setsockopt_proto;
> > @@ -7003,6 +7005,8 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >                 }
> >         case BPF_FUNC_getsockopt:
> >                 switch (prog->expected_attach_type) {
> > +               case BPF_CGROUP_INET4_BIND:
> > +               case BPF_CGROUP_INET6_BIND:
> >                 case BPF_CGROUP_INET4_CONNECT:
> >                 case BPF_CGROUP_INET6_CONNECT:
> >                         return &bpf_sock_addr_getsockopt_proto;
> > diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
> > index b7260c8cef2e..b94fa8eb831b 100644
> > --- a/net/ipv4/af_inet.c
> > +++ b/net/ipv4/af_inet.c
> > @@ -450,7 +450,7 @@ int inet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
> >         /* BPF prog is run before any checks are done so that if the prog
> >          * changes context in a wrong way it will be caught.
> >          */
> > -       err = BPF_CGROUP_RUN_PROG_INET4_BIND(sk, uaddr);
> > +       err = BPF_CGROUP_RUN_PROG_INET4_BIND_LOCK(sk, uaddr);
> 
> I think it is ok, but I need to go through the locking paths more.
> Andrey,
> please take a look as well.

Sorry for the delay, I was offline for the last two weeks.

From the correctness perspective it looks fine to me.

From the performance perspective I can think of one relevant scenario.
A quite common use-case in applications is to call bind(2) not before
listen(2) but before connect(2) for client sockets, so that the
connection can be set up from a specific source IP and, optionally,
port.
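For illustration, that pattern looks roughly like this in userspace (a
sketch with made-up addresses and port, not code from this series):

	#include <arpa/inet.h>
	#include <sys/socket.h>

	/* Client socket bound to a specific source address before
	 * connect(2); 192.0.2.10 and port 12345 are illustrative.
	 */
	static int connect_from(const struct sockaddr_in *dst)
	{
		struct sockaddr_in src = {
			.sin_family = AF_INET,
			.sin_port   = htons(12345),	/* optional source port */
		};
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		inet_pton(AF_INET, "192.0.2.10", &src.sin_addr);
		if (bind(fd, (struct sockaddr *)&src, sizeof(src)))
			return -1;
		return connect(fd, (const struct sockaddr *)dst, sizeof(*dst));
	}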

The case of binding to both an IP and a port is not interesting, since
it's already slow due to get_port().

But some applications do care about connection setup performance and at
the same time need to set the source IP only (no port). In this case
they use the IP_BIND_ADDRESS_NO_PORT socket option, which makes bind(2)
fast (we've discussed it with Stanislav earlier in [0]).
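Continuing the sketch above, the fast variant adds one setsockopt(2)
call and binds to port 0 (again just the assumed pattern):

	int one = 1;

	/* Defer port allocation to connect(2): bind(2) then only sets
	 * the source IP and skips the get_port() work.
	 */
	setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, &one, sizeof(one));
	src.sin_port = 0;
	bind(fd, (struct sockaddr *)&src, sizeof(src));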

I can imagine a pathological case where an application sets up tons of
connections with bind(2) before connect(2) for sockets with
IP_BIND_ADDRESS_NO_PORT enabled (that by itself requires setsockopt(2),
though, i.e. a socket lock/unlock), and the additional lock/unlock to
run the bind hook may add some overhead. Though I do not know how
critical that overhead may be and whether it's worth benchmarking or
not (maybe too much paranoia).

[0] https://lore.kernel.org/bpf/20200505182010.GB55644@rdna-mbp/
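For context, a cgroup/bind4 program using the newly allowed helper
might look like this (a rough sketch, not from this series; the option
choice is arbitrary):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	#define SOL_SOCKET	1
	#define SO_KEEPALIVE	9

	SEC("cgroup/bind4")
	int bind_v4_prog(struct bpf_sock_addr *ctx)
	{
		int one = 1;

		/* With this patch, bpf_setsockopt() is callable from the
		 * bind{4,6} hooks, not only from the connect hooks.
		 */
		bpf_setsockopt(ctx, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
		return 1;	/* allow the bind(2) to proceed */
	}

	char _license[] SEC("license") = "GPL";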

> >         if (err)
> >                 return err;
> >
> > diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
> > index e648fbebb167..a7e3d170af51 100644
> > --- a/net/ipv6/af_inet6.c
> > +++ b/net/ipv6/af_inet6.c
> > @@ -451,7 +451,7 @@ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
> >         /* BPF prog is run before any checks are done so that if the prog
> >          * changes context in a wrong way it will be caught.
> >          */
> > -       err = BPF_CGROUP_RUN_PROG_INET6_BIND(sk, uaddr);
> > +       err = BPF_CGROUP_RUN_PROG_INET6_BIND_LOCK(sk, uaddr);
> >         if (err)
> >                 return err;
> >
> > --
> > 2.29.2.299.gdc1121823c-goog
> >

-- 
Andrey Ignatov
