Message-ID: <20191016103501.GB21367@pc-63.home>
Date:   Wed, 16 Oct 2019 12:35:01 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Toke Høiland-Jørgensen <toke@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Marek Majkowski <marek@...udflare.com>,
        Lorenz Bauer <lmb@...udflare.com>,
        Alan Maguire <alan.maguire@...cle.com>,
        David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
        bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v3 1/5] bpf: Support chain calling multiple BPF
 programs after each other

On Wed, Oct 16, 2019 at 10:27:12AM +0200, Jesper Dangaard Brouer wrote:
> On Tue, 15 Oct 2019 19:28:51 -0700
> Alexei Starovoitov <alexei.starovoitov@...il.com> wrote:
> > On Mon, Oct 14, 2019 at 02:35:45PM +0200, Toke Høiland-Jørgensen wrote:
> > > Alexei Starovoitov <alexei.starovoitov@...il.com> writes:
> > > > On Wed, Oct 09, 2019 at 10:03:43AM +0200, Toke Høiland-Jørgensen wrote:  
> > > >> Alexei Starovoitov <alexei.starovoitov@...il.com> writes:
[...]
> > > > If you disagree please explain _your_ problem again.
> > > > Saying that fb katran is a use case for chaining is, hrm, not correct.  
> > > 
> > > I never said Katran was the driver for this. I just used Katran as one
> > > of the "prior art" examples for my "how are people solving running
> > > multiple programs on the same interface" survey.  
> > 
> > and they solved it. that's the point.
> > 
> > > What I want to achieve is simply the ability to run multiple independent
> > > XDP programs on the same interface, without having to put any
> > > constraints on the programs themselves. I'm not disputing that this is
> > > *possible* to do completely in userspace, I just don't believe the
> > > resulting solution will be very good.  
> > 
> > What makes me uneasy about the whole push for program chaining
> > is that tc cls_bpf supported multiple independent programs from day one.
> > Yet it doesn't help to run two firewalls hooked into tc ingress.
> 
> I do understand your concerns.
> 
> Let me explain why I believe TC cls_bpf's multiple independent
> programs have not seen much usage.
> 
> First of all, the TC tool is notoriously difficult to use and
> configure (I admit, I struggle with this myself every single time).
> (The TC layer has some amazing features, like hash-based lookup, that
> never get used because of this.)

We do use cls_bpf heavily in Cilium, but I don't necessarily agree with
the "notoriously difficult to use" aspect (at least for tc + BPF): i)
this is abstracted away from the /user/ entirely, to the point that it
is an implementation detail they don't need to know about; ii) these
days most access to these hooks happens programmatically. If that is
the worry, then let's simply add a cls_bpf counterpart to APIs like
bpf_set_link_xdp_fd() that we have in libbpf, where you only pass in
the ifindex, direction (ingress/egress) and priority of the program,
and underneath it sets up a clsact qdisc with a cls_bpf instance,
making the whole thing foolproof, e.g.:

  int bpf_set_link_tc_fd(int ifindex, int fd, enum bpf_tc_dir dir,
                         __u32 priority, __u32 flags);

The flags could be similar to XDP: 0, or xxx_FLAGS_UPDATE_IF_NOEXIST
and xxx_FLAGS_HW_MODE. The problem that is easy to miss with the tc
command-line tool is that when you don't specify an explicit
prio/handle on tc replace, it auto-allocates one and keeps adding new
programs instead of replacing the old ones, but this quirk can be
avoided with an API like the above.
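
For the sake of illustration, this is roughly how a loader could
consume such an API. To be clear, it is only a sketch: neither
bpf_set_link_tc_fd() nor the enum/flag names below exist in libbpf
today, they are placeholders mirroring the XDP ones:

  /* Sketch only: none of these names exist in libbpf yet. */
  enum bpf_tc_dir {
          BPF_TC_INGRESS,
          BPF_TC_EGRESS,
  };

  #define BPF_TC_F_UPDATE_IF_NOEXIST      (1U << 0)

  int bpf_set_link_tc_fd(int ifindex, int fd, enum bpf_tc_dir dir,
                         __u32 priority, __u32 flags);

  static int attach_tc_prog(int ifindex, int prog_fd)
  {
          /* Attach on ingress at priority 1; underneath this would set
           * up a clsact qdisc with a cls_bpf instance and replace
           * whatever program sits at that priority, avoiding the
           * "keeps adding programs" quirk mentioned above.
           */
          return bpf_set_link_tc_fd(ifindex, prog_fd, BPF_TC_INGRESS,
                                    1 /* priority */, 0 /* flags */);
  }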

> Second, the multiple "independent programs" are actually not
> independent, because the currently running program must return
> TC_ACT_UNSPEC to allow the next bpf-prog to run.  Thus, it is not
> really usable.

I'd argue that unless the only thing your debugging program does is
introspect (read-only) the packet at the current point, you'd run into
a similar coordination issue. Meaning, the "independent programs"
approach works for simple cases where you only have an ACCEPT and DROP
policy, so you can run through all the programs and give precedence to
DROP.
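
To make that simple case concrete, such a setup composes with nothing
more than a fold like the following (toy sketch, not code that exists
anywhere):

  /* Toy sketch: combining the results of N "independent" programs when
   * the only verdicts are ACCEPT and DROP, with DROP taking precedence.
   * As soon as programs also mangle or redirect packets, a fold like
   * this no longer expresses which program wins.
   */
  enum verdict { VERDICT_ACCEPT, VERDICT_DROP };

  static enum verdict fold_verdicts(const enum verdict *res, int n)
  {
          for (int i = 0; i < n; i++)
                  if (res[i] == VERDICT_DROP)
                          return VERDICT_DROP;
          return VERDICT_ACCEPT;
  }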

But once you have conflicting policies with regard to how these
programs mangle and redirect packets, how would you handle those? I'd
argue it's a non-trivial task to outsource if /admins/ are supposed to
do manual order adjustments and, more importantly, to troubleshoot the
issues they cause. Debugging hooks would potentially make that easier
by avoiding recompilation, but that's more of a developer task.

Oftentimes orchestration tools i) assume they simply own the data path
to reduce complexity in an already complex system and ii) keep
'refreshing' their setup. One random example of the latter is k8s'
kube-proxy, which reinstalls its iptables rules every x seconds to
make sure there was no manual messing around and to keep the data path
eventually consistent with the daemon's view (in case the rules got
borked). How would you make the loader aware of daemons automatically
refreshing/reconfiguring their BPF progs in a situation where admins
have changed the pipeline? By adding a handle similar to tc's, so that
whoever does the 'chain' assembly knows which one to update?

> > Similarly cgroup-bpf had a ton discussions on proper multi-prog api.
> > Everyone was eventually convinced that it's flexible and generic.
> > Yet people who started to use it complain that it's missing features
> > to make it truly usable in production.
> 
> I've not looked at the cgroup-bpf multi-prog API; I guess we should,
> to understand why this failed.
> 
> > Tracing is the only bit where multi-prog works.
> > Because kernel always runs all programs there.
> 
> This is an important insight ("the kernel always runs all programs").
> A key part of Toke's chain-calling design is that the kernel always
> runs all the XDP/BPF-progs in the chain, regardless of the XDP return
> value.  The next program in the chain needs info about the previous
> BPF-prog's return value, but it can choose to override it.
> 
> > If we could use PROG_RUN_ARRAY for XDP that could have been a solution.
> > But we cannot. Return codes matter for XDP.
> 
> Toke's proposal is to allow the next-chain BPF-program to override
> the previous BPF-prog's return value.  This part of the design is, I
> must admit, also the only option due to tail-calls.  But I do think
> it makes sense, because even if XDP_DROP is returned, I can install
> another XDP-prog that does XDP_REDIRECT out another interface to an
> analyzer box, or into an AF_XDP based dump tool.
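
For illustration only (not taken from the series): this is the kind of
next-in-chain XDP program that such an override semantic enables,
redirecting a packet to an analyzer port even though the previous
program said XDP_DROP. get_prev_verdict() is a stub standing in for
whatever mechanism the proposal uses to expose the previous return
code, and ANALYZER_IFINDEX is a made-up ifindex:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  #define ANALYZER_IFINDEX 4      /* made-up ifindex of the mirror port */

  /* Stub: the chain-call mechanism would supply the previous program's
   * return code here; how exactly is what the series defines.
   */
  static __always_inline int get_prev_verdict(struct xdp_md *ctx)
  {
          return XDP_PASS;
  }

  SEC("xdp")
  int xdp_mirror_drops(struct xdp_md *ctx)
  {
          int prev = get_prev_verdict(ctx);

          /* Even if the previous program decided to drop, redirect the
           * packet out to the analyzer interface instead.
           */
          if (prev == XDP_DROP)
                  return bpf_redirect(ANALYZER_IFINDEX, 0);

          return prev;
  }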

Thanks,
Daniel
