Date:   Mon, 11 Nov 2019 18:51:14 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Toke Høiland-Jørgensen <toke@...hat.com>
Cc:     Edward Cree <ecree@...arflare.com>,
        John Fastabend <john.fastabend@...il.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Marek Majkowski <marek@...udflare.com>,
        Lorenz Bauer <lmb@...udflare.com>,
        Alan Maguire <alan.maguire@...cle.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
        bpf@...r.kernel.org
Subject: static and dynamic linking. Was: [PATCH bpf-next v3 1/5] bpf:
 Support chain calling multiple BPF

On Tue, Oct 22, 2019 at 08:07:42PM +0200, Toke Høiland-Jørgensen wrote:
> 
> I believe this is what Alexei means by "indirect calls". That is
> different, though, because it implies that each program lives as a
> separate object in the kernel - and so it might actually work. What you
> were talking about (until this paragraph) was something that was
> entirely in userspace, and all the kernel sees is a blob of the eBPF
> equivalent of `cat *.so > my_composite_prog.so`.

So I've looked at indirect calls and realized that they're _indirect_ calls.
The retpoline overhead will be there, so a solution has to work without them.
I still think they're necessary for all sorts of things, but the priority has shifted.

I think what Ed is proposing with static linking is the best generic solution.
The chaining policy doesn't belong in the kernel. User space can express the
chaining logic in the form of a BPF program. Static linking achieves that. There
could be a 'root' bpf program (let's call it rootlet.o) that looks like:
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

int xdp_firewall_placeholder1(struct xdp_md *ctx)
{
   return XDP_PASS;
}
int xdp_firewall_placeholder2(struct xdp_md *ctx)
{
   return XDP_PASS;
}
int xdp_load_balancer_placeholder1(struct xdp_md *ctx)
{
   return XDP_PASS;
}
int main_xdp_prog(struct xdp_md *ctx)
{
   int ret;

   ret = xdp_firewall_placeholder1(ctx);
   switch (ret) {
   case XDP_PASS: break;
   case XDP_DROP: return XDP_DROP;
   case XDP_TX: case XDP_REDIRECT:
      /* buggy firewall */
      bpf_perf_event_output(ctx,...);
   default: break; /* or whatever else */
   }
   
   ret = xdp_firewall_placeholder2(ctx);
   switch (ret) {
   case XDP_PASS: break;
   case XDP_DROP: return XDP_DROP;
   default: break;
   }

   ret = xdp_load_balancer_placeholder1(ctx);
   switch (ret) {
   case XDP_PASS: break;
   case XDP_DROP: return XDP_DROP;
   case XDP_TX: return XDP_TX;
   case XDP_REDIRECT: return XDP_REDIRECT;
   default: break; /* or whatever else */
   }
   return XDP_PASS;
}

When firewall1.rpm is installed it needs to use either a central daemon or a
common library (let's call it libxdp.so) that takes care of orchestration. The
library would need to keep state somewhere (like a local file or a database).
The state will include rootlet.o and the new firewall1.o that wants to be linked
into the existing program chain. When firewall2.rpm gets installed it calls the
same libxdp.so functions that operate on the shared state. libxdp.so needs to link
firewall1.o + firewall2.o + rootlet.o into one program and attach it to the netdev.
This is static linking. The existing kernel infrastructure already supports
such a model and I think it's enough for a lot of use cases. In particular fb's
firewall+katran XDP style will fit right in. But bpf_tail_calls are
incompatible with the bpf2bpf calls that static linking will use, and I think the
Cloudflare folks expressed an interest in using them, for some reason, even within
a single firewall, so we need to improve the model a bit.
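
For illustration only, the linking step inside libxdp.so could look roughly like
the sketch below. It assumes a libbpf-style object linker along the lines of
bpf_linker__new(), bpf_linker__add_file() and bpf_linker__finalize(); the file
names and the function are placeholders, not part of this proposal:

#include <bpf/libbpf.h>

/* Sketch: combine the rootlet and the installed filters into a single
 * object file that can then be loaded and attached to the netdev.
 */
static int libxdp_link_chain(const char *out_path)
{
   struct bpf_linker *linker = bpf_linker__new(out_path, NULL);

   if (!linker)
      return -1;
   if (bpf_linker__add_file(linker, "rootlet.o", NULL) ||
       bpf_linker__add_file(linker, "firewall1.o", NULL) ||
       bpf_linker__add_file(linker, "firewall2.o", NULL) ||
       bpf_linker__finalize(linker)) {
      bpf_linker__free(linker);
      return -1;
   }
   bpf_linker__free(linker);
   return 0;
}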

We can introduce dynamic linking. The second part of the 'BPF trampoline' patches
allows tracing programs to attach to other BPF programs. The idea of dynamic
linking is to replace a program or subprogram instead of attaching to it.
The firewall1.rpm application will still use libxdp.so, but instead of statically
linking it will ask the kernel to replace the subprogram rootlet_fd +
btf_id_of_xdp_firewall_placeholder1 with the new firewall1.o. The same interface
is used for attaching a tracing prog to a networking prog. Initially I plan to
keep the verifier's job simple and allow replacing an xdp-equivalent subprogram
with an xdp program. Meaning that the subprogram (in the above case
xdp_firewall_placeholder1) needs to have exactly one argument and it has to be
'struct xdp_md *'. Then during the loading of firewall1.o the verifier wouldn't
need to re-verify the whole thing. The BTF type matching that the verifier is
doing as part of the 'BPF trampoline' series will be reused for this purpose.
Longer term I'd like to allow more than one argument while preserving the partial
verification model. The rootlet.o calls into firewall1.o directly, so there is no
retpoline to worry about, and firewall1.o can use bpf_tail_call() if it wants to.
That tail_call will still return back to rootlet.o, which will make the policy
decision. This rootlet.o can be automatically generated by libxdp.so. If in the
future we figure out how to do two load-balancers, libxdp.so will be able to
accommodate that new policy. This firewall1.o can be developed and tested
independently of other xdp programs. The key gotcha here is that the verifier
needs to allow more than 512 bytes of stack usage for the rootlet.o. I think
that's acceptable.
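
Just as a sketch of what the userspace side of such a replacement might look
like (this assumes an extension-style program type for firewall1.o and libbpf
helpers along the lines of bpf_program__set_attach_target() and
bpf_program__attach_freplace(); rootlet_prog_fd, the object and the program
names are placeholders, error handling is simplified):

#include <bpf/libbpf.h>

/* Sketch: replace the rootlet's xdp_firewall_placeholder1 subprogram with
 * the real logic from firewall1.o. rootlet_prog_fd is the fd of the
 * already loaded rootlet program.
 */
static struct bpf_link *libxdp_replace_placeholder(int rootlet_prog_fd)
{
   struct bpf_object *obj;
   struct bpf_program *prog;
   struct bpf_link *link;

   obj = bpf_object__open_file("firewall1.o", NULL);
   if (!obj)
      return NULL;

   /* firewall1.o defines xdp_firewall1(struct xdp_md *ctx), matching the
    * placeholder's single 'struct xdp_md *' argument
    */
   prog = bpf_object__find_program_by_name(obj, "xdp_firewall1");
   if (!prog)
      goto err;

   /* tell the verifier which subprogram of which program is being
    * replaced; BTF type matching happens at load time
    */
   if (bpf_program__set_attach_target(prog, rootlet_prog_fd,
                                      "xdp_firewall_placeholder1"))
      goto err;
   if (bpf_object__load(obj))
      goto err;

   link = bpf_program__attach_freplace(prog, rootlet_prog_fd,
                                       "xdp_firewall_placeholder1");
   if (link)
      return link;
err:
   bpf_object__close(obj);
   return NULL;
}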

In the future indirect calls will allow rootlet.o to be cleaner:
typedef int (*ptr_to_xdp_prog)(struct xdp_md *ctx);
ptr_to_xdp_prog prog_array[100];

int main_xdp_prog(struct xdp_md *ctx)
{
   int ret, i;

   for (i = 0; i < 100; i++) {
       ret = prog_array[i](ctx);
       switch (ret) {
       case XDP_PASS: break;
       case XDP_DROP: return XDP_DROP;
       /* ... */
       default: break;
       }
   }
   return XDP_PASS;
}
but they're indirect calls, which means retpoline. Hence lower priority atm.

Thoughts?
