Message-ID: <874l07fu61.fsf@toke.dk>
Date:   Thu, 17 Oct 2019 14:11:50 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Edward Cree <ecree@...arflare.com>,
        John Fastabend <john.fastabend@...il.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Marek Majkowski <marek@...udflare.com>,
        Lorenz Bauer <lmb@...udflare.com>,
        Alan Maguire <alan.maguire@...cle.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
        bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v3 1/5] bpf: Support chain calling multiple BPF programs after each other

Edward Cree <ecree@...arflare.com> writes:

> On 15/10/2019 17:42, Toke Høiland-Jørgensen wrote:
>> Edward Cree <ecree@...arflare.com> writes:
>>> On 14/10/2019 19:48, Toke Høiland-Jørgensen wrote:
>>>> So that will end up with a single monolithic BPF program being loaded
>>>> (from the kernel PoV), right? That won't do; we want to be able to go
>>>> back to the component programs, and manipulate them as separate kernel
>>>> objects.
>>> Why's that? (Since it also applies to the static-linking PoC I'm
>>> putting together.) What do you gain by having the components be
>>> kernel-visible?
>> Because then userspace will have to keep state to be able to answer
>> questions like "show me the list of programs that are currently loaded
>> (and their call chain)", or do operations like "insert this program into
>> the call chain at position X".
> Userspace keeps state for stuff all the time.  We call them "daemons" ;)
> Now you might have arguments for why putting a given piece of state in
>  userspace is a bad idea — there's a reason why not everything is a
>  microkernel — but those arguments need to be made.
>
>> We already keep all this state in the kernel,
> The kernel keeps the state of "current (monolithic) BPF program loaded
>  (against each hook)".  Prior to this patch series, the kernel does
>  *not* keep any state on what that BPF program was made of (except in
>  the sense of BTF debuginfos, which a linker could combine appropriately).
>
> So if we _don't_ add your chained-programs functionality into the kernel,
>  and then _do_ implement userspace linking, then there isn't any
>  duplicated functionality or even duplicated state — the userland state
>  is "what are my components and what's the linker invocation that glues
>  them together", the kernel state is "here is one monolithic BPF blob,
>  along with a BTF blob to debug it".  The kernel knows nothing of the
>  former, and userspace doesn't store (but knows how to recreate) the
>  latter.
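
To make the distinction concrete: in that model, a userspace linker
would emit one glued-together program along these lines. This is an
illustrative sketch only; the component and dispatcher names below are
made up, not taken from any actual tooling:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Component A, e.g. a firewall policy (stubbed out here). */
static int xdp_firewall(struct xdp_md *ctx)
{
        return XDP_PASS;
}

/* Component B, e.g. per-interface packet counters (stubbed out). */
static int xdp_counter(struct xdp_md *ctx)
{
        return XDP_PASS;
}

/* Linker-generated glue: the only program the kernel ever sees. */
SEC("xdp")
int xdp_dispatch(struct xdp_md *ctx)
{
        int ret = xdp_firewall(ctx);

        if (ret != XDP_PASS)
                return ret;     /* short-circuit the rest of the chain */
        return xdp_counter(ctx);
}

char _license[] SEC("license") = "GPL";

In that world the kernel and its verifier see only xdp_dispatch (plus
a BTF blob); the per-component structure exists purely in the linker
invocation.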

I think there's a conceptual disconnect here in how we view what an XDP
program is. In my mind, an XDP program is a stand-alone entity tied to a
particular application; not a library function that can just be inserted
into another program. Thus, what you're proposing sounds to me like the
equivalent of saying "we don't want to do process management in the
kernel; the init process should just link in all the programs userspace
wants to run". Which is technically possible; but that doesn't make it a
good idea.

Setting that aside for a moment: the reason I don't think this belongs
in userspace is that putting it there would carry a complexity cost that
is higher than having it in the kernel. Specifically, if we do implement
an 'xdpd' daemon to handle all this, that would mean that we:

- Introduce a new, separate code base that we'll have to write, support
  and manage updates to.

- Add a new dependency to using XDP (now you not only need the kernel
  and libraries, you'll also need the daemon).

- Have to duplicate or wrap functionality currently found in the kernel;
  at least:
  
    - Keeping track of which XDP programs are loaded and attached to
      each interface (as well as the "new state" of their attachment
      order).

    - Some kind of interface with the verifier; if an app does
      xdpd_rpc_load(prog), how is the verifier result going to get back
      to the caller? (See the sketch after this list.)

- Have to deal with state synchronisation issues (how does xdpd handle
  kernel state changing from underneath it?).
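
To make the verifier point above concrete, the daemon's RPC surface
would need to look something like the following. None of these xdpd_*
names exist anywhere; they are hypothetical, purely for illustration:

#include <stddef.h>

/* Hypothetical xdpd RPC surface; nothing like this exists today. */
struct xdpd_load_result {
        int  prog_id;               /* kernel ID of the loaded program */
        int  err;                   /* 0 on success, -errno otherwise */
        char verifier_log[4096];    /* captured from the kernel, relayed back */
};

/* The daemon would have to call bpf(BPF_PROG_LOAD, ...) on the
 * caller's behalf, capture the verifier log the kernel returns, and
 * ship it back over the RPC channel so the real caller can see why
 * its program was rejected. */
int xdpd_rpc_load(int sock, const void *insns, size_t insn_cnt,
                  struct xdpd_load_result *res);

/* Insert an already-loaded program into the chain on an interface at
 * a given position; this is exactly the state that anyone talking to
 * the kernel directly can change from underneath the daemon. */
int xdpd_rpc_insert(int sock, int ifindex, int prog_id, int position);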


While these issues are (probably) all solvable, I think the cost of
solving them is far higher than the cost of putting the support into
the kernel. Which is why I think kernel support is the best solution :)
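
By contrast, with kernel support each component program stays a
first-class kernel object and the chain itself is kernel state, so
"insert this program into the call chain at position X" reduces to a
single syscall. Roughly like the following; the names are illustrative
only, not the exact UAPI this series proposes:

/* Illustrative sketch, not the actual interface of this series. */
struct bpf_chain_attach {
        int prev_prog_fd;   /* program whose return code is matched */
        int retcode;        /* e.g. XDP_PASS */
        int next_prog_fd;   /* program to run on that return code */
};

/* Hypothetical command in the style of existing bpf(2) commands: no
 * daemon, no mirrored state; the kernel already knows the chain. */
int bpf_prog_chain_add(const struct bpf_chain_attach *attr);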

-Toke
