Date:   Thu, 2 Jul 2020 14:19:22 +0100
From:   Lorenz Bauer <lmb@...udflare.com>
To:     Jakub Sitnicki <jakub@...udflare.com>
Cc:     bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
        kernel-team <kernel-team@...udflare.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Marek Majkowski <marek@...udflare.com>
Subject: Re: [PATCH bpf-next v3 04/16] inet: Run SK_LOOKUP BPF program on
 socket lookup

On Thu, 2 Jul 2020 at 13:46, Jakub Sitnicki <jakub@...udflare.com> wrote:
>
> On Thu, Jul 02, 2020 at 12:27 PM CEST, Lorenz Bauer wrote:
> > On Thu, 2 Jul 2020 at 10:24, Jakub Sitnicki <jakub@...udflare.com> wrote:
> >>
> >> Run a BPF program before looking up a listening socket on the receive path.
> >> The program selects the listening socket to be yielded as the result of
> >> the socket lookup by calling the bpf_sk_assign() helper and returning the
> >> BPF_REDIRECT (7) code.
> >>
> >> Alternatively, the program can fail the lookup by returning BPF_DROP (1),
> >> or let the lookup continue as usual by returning BPF_OK (0). Any other
> >> return value is treated the same as BPF_OK.
> >
> > I'd prefer if other values were treated as BPF_DROP, with other semantics
> > unchanged. Otherwise we won't be able to introduce new semantics
> > without potentially breaking user code.
>
> That might be surprising or even risky. If you attach a badly written
> program that, say, returns a negative value, it will drop all TCP SYNs
> and UDP traffic.

I think if you do that, all bets are off anyway. No use in trying to stagger on.
Being stricter here will actually make it easier for a developer to ensure
that their program is doing the right thing.

My point about future extensions also still stands.
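
For what it's worth, the kind of program we want to write against this
hook looks roughly like the sketch below (untested; target_sock,
steer_port_range and the port range are made-up names for illustration,
and I'm assuming the v3 context fields and return codes described above).
With strict return-code checking, a botched return value in a program
like this would fail loudly instead of silently falling through:

#include <linux/bpf.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>

/* One listening socket, inserted from userspace at key 0. */
struct {
        __uint(type, BPF_MAP_TYPE_SOCKMAP);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
} target_sock SEC(".maps");

SEC("sk_lookup")
int steer_port_range(struct bpf_sk_lookup *ctx)
{
        const __u32 zero = 0;
        struct bpf_sock *sk;
        long err;

        /* Steer TCP ports 80-90 to the single socket in target_sock. */
        if (ctx->protocol != IPPROTO_TCP)
                return BPF_OK;          /* continue with htable lookup */
        if (ctx->local_port < 80 || ctx->local_port > 90)
                return BPF_OK;

        sk = bpf_map_lookup_elem(&target_sock, &zero);
        if (!sk)
                return BPF_DROP;        /* fail lookup with -ECONNREFUSED */

        err = bpf_sk_assign(ctx, sk, 0);
        bpf_sk_release(sk);
        return err ? BPF_DROP : BPF_REDIRECT;   /* use the selected socket */
}

char _license[] SEC("license") = "GPL";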

>
> >
> >>
> >> This lets the user match packets with listening sockets freely at the last
> >> possible point on the receive path, where we know that packets are destined
> >> for local delivery after undergoing policing, filtering, and routing.
> >>
> >> With BPF code selecting the socket, directing packets destined to an IP
> >> range or to a port range to a single socket becomes possible.
> >>
> >> In case multiple programs are attached, they are run in series in the order
> >> in which they were attached. The end result is determined from the return
> >> codes of all the programs according to the following rules.
> >>
> >>  1. If any program returned BPF_REDIRECT and selected a valid socket, this
> >>     socket will be used as the result of the lookup.
> >>  2. If more than one program returned BPF_REDIRECT and selected a socket,
> >>     the last selection takes effect.
> >>  3. If any program returned BPF_DROP and none returned BPF_REDIRECT, the
> >>     socket lookup will fail with -ECONNREFUSED.
> >>  4. If no program returned either BPF_DROP or BPF_REDIRECT, the socket
> >>     lookup continues with the regular htable-based lookup.
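
As an aside, to check my understanding: I read rules 1-4 as equivalent to
the standalone model below. This is a sketch only, not the run-array code
from the patch; run() here stands in for invoking the i-th attached
program and collecting any socket it selected via bpf_sk_assign():

#include <errno.h>
#include <stddef.h>

struct sock;

enum sk_action { BPF_OK = 0, BPF_DROP = 1, BPF_REDIRECT = 7 };

struct sock *resolve_lookup(int nprogs,
                            enum sk_action (*run)(int i, struct sock **sel),
                            int *err)
{
        struct sock *selected = NULL, *sk;
        enum sk_action final = BPF_OK;
        int i;

        *err = 0;
        for (i = 0; i < nprogs; i++) {          /* attach order */
                sk = NULL;
                switch (run(i, &sk)) {
                case BPF_REDIRECT:
                        if (sk) {               /* rules 1 and 2: last valid
                                                 * selection wins */
                                selected = sk;
                                final = BPF_REDIRECT;
                        }
                        break;
                case BPF_DROP:
                        if (final != BPF_REDIRECT)
                                final = BPF_DROP;       /* rule 3 */
                        break;
                default:                        /* BPF_OK and anything else */
                        break;
                }
        }
        if (final == BPF_DROP)
                *err = -ECONNREFUSED;           /* rule 3 */
        return selected;        /* NULL with *err == 0: rule 4, htable lookup */
}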
> >>
> >> Suggested-by: Marek Majkowski <marek@...udflare.com>
> >> Signed-off-by: Jakub Sitnicki <jakub@...udflare.com>
> >> ---
> >>
> >> Notes:
> >>     v3:
> >>     - Use a static_key to minimize the hook overhead when not used. (Alexei)
> >>     - Adapt for running an array of attached programs. (Alexei)
> >>     - Adapt for optionally skipping reuseport selection. (Martin)
> >>
> >>  include/linux/bpf.h        | 29 ++++++++++++++++++++++++++++
> >>  include/linux/filter.h     | 39 ++++++++++++++++++++++++++++++++++++++
> >>  kernel/bpf/net_namespace.c | 32 ++++++++++++++++++++++++++++++-
> >>  net/core/filter.c          |  2 ++
> >>  net/ipv4/inet_hashtables.c | 31 ++++++++++++++++++++++++++++++
> >>  5 files changed, 132 insertions(+), 1 deletion(-)
> >>
>
> [...]
>
> >> diff --git a/kernel/bpf/net_namespace.c b/kernel/bpf/net_namespace.c
> >> index 090166824ca4..a7768feb3ade 100644
> >> --- a/kernel/bpf/net_namespace.c
> >> +++ b/kernel/bpf/net_namespace.c
> >> @@ -25,6 +25,28 @@ struct bpf_netns_link {
> >>  /* Protects updates to netns_bpf */
> >>  DEFINE_MUTEX(netns_bpf_mutex);
> >>
> >> +static void netns_bpf_attach_type_disable(enum netns_bpf_attach_type type)
> >
> > Nit: maybe netns_bpf_attach_type_dec()? Disable sounds like it happens
> > unconditionally.
>
> attach_type_dec()/_inc() seems a bit cryptic, since it's not the attach
> type we are incrementing/decrementing.
>
> But I was considering _need()/_unneed(), which would follow an existing
> example, if you think that improves things.

SGTM!
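
i.e., if I follow, just your existing helpers renamed:

static void netns_bpf_attach_type_need(enum netns_bpf_attach_type type)
{
        switch (type) {
        case NETNS_BPF_SK_LOOKUP:
                static_branch_inc(&bpf_sk_lookup_enabled);
                break;
        default:
                break;
        }
}

static void netns_bpf_attach_type_unneed(enum netns_bpf_attach_type type)
{
        switch (type) {
        case NETNS_BPF_SK_LOOKUP:
                static_branch_dec(&bpf_sk_lookup_enabled);
                break;
        default:
                break;
        }
}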

>
> >
> >> +{
> >> +       switch (type) {
> >> +       case NETNS_BPF_SK_LOOKUP:
> >> +               static_branch_dec(&bpf_sk_lookup_enabled);
> >> +               break;
> >> +       default:
> >> +               break;
> >> +       }
> >> +}
> >> +
> >> +static void netns_bpf_attach_type_enable(enum netns_bpf_attach_type type)
> >> +{
> >> +       switch (type) {
> >> +       case NETNS_BPF_SK_LOOKUP:
> >> +               static_branch_inc(&bpf_sk_lookup_enabled);
> >> +               break;
> >> +       default:
> >> +               break;
> >> +       }
> >> +}
> >> +
> >>  /* Must be called with netns_bpf_mutex held. */
> >>  static void netns_bpf_run_array_detach(struct net *net,
> >>                                        enum netns_bpf_attach_type type)
> >> @@ -93,6 +115,9 @@ static void bpf_netns_link_release(struct bpf_link *link)
> >>         if (!net)
> >>                 goto out_unlock;
> >>
> >> +       /* Mark attach point as unused */
> >> +       netns_bpf_attach_type_disable(type);
> >> +
> >>         /* Remember link position in case of safe delete */
> >>         idx = link_index(net, type, net_link);
> >>         list_del(&net_link->node);
> >> @@ -416,6 +441,9 @@ static int netns_bpf_link_attach(struct net *net, struct bpf_link *link,
> >>                                         lockdep_is_held(&netns_bpf_mutex));
> >>         bpf_prog_array_free(run_array);
> >>
> >> +       /* Mark attach point as used */
> >> +       netns_bpf_attach_type_enable(type);
> >> +
> >>  out_unlock:
> >>         mutex_unlock(&netns_bpf_mutex);
> >>         return err;
> >> @@ -491,8 +519,10 @@ static void __net_exit netns_bpf_pernet_pre_exit(struct net *net)
> >>         mutex_lock(&netns_bpf_mutex);
> >>         for (type = 0; type < MAX_NETNS_BPF_ATTACH_TYPE; type++) {
> >>                 netns_bpf_run_array_detach(net, type);
> >> -               list_for_each_entry(net_link, &net->bpf.links[type], node)
> >> +               list_for_each_entry(net_link, &net->bpf.links[type], node) {
> >>                         net_link->net = NULL; /* auto-detach link */
> >> +                       netns_bpf_attach_type_disable(type);
> >> +               }
> >>                 if (net->bpf.progs[type])
> >>                         bpf_prog_put(net->bpf.progs[type]);
> >>         }
> >> diff --git a/net/core/filter.c b/net/core/filter.c
>
> [...]



-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com
