Message-ID: <1871cacb-4a43-f906-9a9b-ba6a2ca866dd@solarflare.com>
Date: Mon, 7 Oct 2019 17:43:44 +0100
From: Edward Cree <ecree@...arflare.com>
To: Lorenz Bauer <lmb@...udflare.com>
CC: Toke Høiland-Jørgensen <toke@...hat.com>,
"John Fastabend" <john.fastabend@...il.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Song Liu <songliubraving@...com>,
Daniel Borkmann <daniel@...earbox.net>,
Alexei Starovoitov <ast@...nel.org>, Martin Lau <kafai@...com>,
Yonghong Song <yhs@...com>,
Marek Majkowski <marek@...udflare.com>,
David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
kernel-team <kernel-team@...udflare.com>
Subject: Re: [PATCH bpf-next 0/9] xdp: Support multiple programs on a single
interface through chain calls
On 04/10/2019 16:58, Lorenz Bauer wrote:
> If you want to support
> all use cases (which you kind of have to) then you'll end up writing an
> RPC wrapper for libbpf,
Yes, you would more or less need that. Though I think you could, e.g., have
the clients load & pin their own maps and then pass the map fds over
SCM_RIGHTS (though I'm not sure if our current permissions system is
granular enough for that).
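For concreteness, the fd-passing step could look something like the sketch
below (function and variable names are made up for illustration, error
handling elided):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Client side: hand a pinned map's fd to the loader daemon over a
 * connected AF_UNIX socket.  SCM_RIGHTS installs a new fd referring to
 * the same map in the receiving process.
 */
static int send_map_fd(int sock, int map_fd)
{
	char tag = 'M';	/* one byte of payload, just so sendmsg has data */
	struct iovec iov = { .iov_base = &tag, .iov_len = 1 };
	union {		/* ensures cmsg buffer alignment, as in cmsg(3) */
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} u;
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg;

	memset(&u, 0, sizeof(u));
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &map_fd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

The daemon recvmsg()s the same way and ends up with its own reference to
the map, which it can then use at prog load time.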
> which sounds very painful to me.
I might be being naïve, but it doesn't sound more painful than is normal
for userland. I mean, what operations have you got:
* create/destroy map (maybe, see above)
* load prog (pass it an fd from which it can read an ELF, and more fds
for the maps it uses. Everything else, e.g. BTFs, can just live in the
ELF.)
* destroy prog
* bind prog to hook (admittedly there's a long list of hooks, but this is
only to cover the XDP ones, so basically we just have to specify
interface and generic/driver/hw)
That doesn't seem like it presents great difficulties?
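To make that concrete, the daemon's request interface could be about as
small as the below (entirely hypothetical names, just to show the shape
of it):

#include <linux/types.h>
#include <linux/if_link.h>	/* XDP_FLAGS_{SKB,DRV,HW}_MODE */

enum loader_op {
	LOADER_OP_CREATE_MAP = 1,	/* maybe unneeded if clients pin their own */
	LOADER_OP_DESTROY_MAP,
	LOADER_OP_LOAD_PROG,		/* ELF fd and map fds arrive via SCM_RIGHTS */
	LOADER_OP_DESTROY_PROG,
	LOADER_OP_BIND_PROG,		/* attach a loaded prog to an XDP hook */
};

struct loader_bind_req {
	__u32 prog_handle;	/* handle handed back by LOADER_OP_LOAD_PROG */
	__u32 ifindex;		/* which interface */
	__u32 xdp_flags;	/* generic/driver/hw: XDP_FLAGS_*_MODE */
};

Everything else (ELF parsing, BTF, map relocations) stays inside the
daemon, behind libbpf.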
>> Incidentally, there's also a performance advantage to an eBPF dispatcher,
>> because it means the calls to the individual programs can be JITted and
>> therefore be direct, whereas an in-kernel data-driven dispatcher has to
>> use indirect calls (*waves at spectre*).
> This is if we somehow got full blown calls between distinct eBPF programs?
No, I'm talking about doing a linker step (using the 'full-blown calls'
_within_ an eBPF program that Alexei added a few months back) before the
program is submitted to the kernel. So the BPF_CALL|BPF_PSEUDO_CALL insn
gets JITed to a direct call.
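Roughly, the loader would generate and link something like the below
against the client programs' objects before load. prog_a/prog_b stand in
for the clients' entry points, the "stop on the first non-PASS verdict"
policy is just one possible choice, and the link step itself is assumed
tooling that doesn't exist yet:

#include <linux/bpf.h>

/* Placeholders, resolved from the client objects at the link step. */
int prog_a(struct xdp_md *ctx);
int prog_b(struct xdp_md *ctx);

__attribute__((section("xdp"), used))
int dispatcher(struct xdp_md *ctx)
{
	int ret;

	/* After linking, each of these is an ordinary BPF-to-BPF call
	 * (BPF_CALL|BPF_PSEUDO_CALL), which the JIT emits as a direct
	 * call rather than an indirect one.
	 */
	ret = prog_a(ctx);
	if (ret != XDP_PASS)
		return ret;
	return prog_b(ctx);
}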
(Although I also think full-blown dynamically-linked calls ought not to be
impossible, *if* we restrict them to taking a ctx and returning a u64, in
which case the callee can be verified as though it were a normal program,
and verification of the caller just treats the call as returning an unknown
scalar. The devil is in the details, though, and it seems no one's quite
wanted it enough to do the work required to make it happen.)
>> Maybe Lorenz could describe what he sees as the difficulties with the
>> centralised daemon approach. In what ways is his current "xdpd"
>> solution unsatisfactory?
> xdpd contains the logic to load and install all the different XDP programs
> we have. If we want to change one of them we have to redeploy the whole
> thing. Same if we want to add one. It also makes life-cycle management
> harder than it should be. So our xdpd is not at all like the "loader"
> you envision.
OK, but in that case xdpd isn't evidence that the "loader" approach doesn't
work, so I still think it should be tried before we go to the lengths of
pushing something into the kernel (that we then have to maintain forever).
No promises, but I might find the time to put together a strawman
implementation of the loader, to show how I envisage it working.
-Ed