Date: Fri, 30 Jun 2023 17:52:05 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>, 
 Jakub Kicinski <kuba@...nel.org>, 
 John Fastabend <john.fastabend@...il.com>
Cc: Stanislav Fomichev <sdf@...gle.com>, 
 Alexei Starovoitov <alexei.starovoitov@...il.com>, 
 Donald Hunter <donald.hunter@...il.com>, 
 bpf <bpf@...r.kernel.org>, 
 Alexei Starovoitov <ast@...nel.org>, 
 Daniel Borkmann <daniel@...earbox.net>, 
 Andrii Nakryiko <andrii@...nel.org>, 
 Martin KaFai Lau <martin.lau@...ux.dev>, 
 Song Liu <song@...nel.org>, 
 Yonghong Song <yhs@...com>, 
 KP Singh <kpsingh@...nel.org>, 
 Hao Luo <haoluo@...gle.com>, 
 Jiri Olsa <jolsa@...nel.org>, 
 Network Development <netdev@...r.kernel.org>
Subject: Re: [RFC bpf-next v2 11/11] net/mlx5e: Support TX timestamp metadata

Toke Høiland-Jørgensen wrote:
> Jakub Kicinski <kuba@...nel.org> writes:
> 
> > On Tue, 27 Jun 2023 14:43:57 -0700 John Fastabend wrote:
> >> What I think would be the most straight-forward thing and most flexible
> >> is to create a <drvname>_devtx_submit_skb(<drivname>descriptor, sk_buff)
> >> and <drvname>_devtx_submit_xdp(<drvname>descriptor, xdp_frame) and then
> >> corresponding calls for <drvname>_devtx_complete_{skb|xdp}() Then you
> >> don't spend any cycles building the metadata thing or have to even
> >> worry about read kfuncs. The BPF program has read access to any
> >> fields they need. And with the skb, xdp pointer we have the context
> >> that created the descriptor and generate meaningful metrics.
> >
> > Sorry but this is not going to happen without my nack. DPDK was a much
> > cleaner bifurcation point than trying to write datapath drivers in BPF.
> > Users having to learn how to render descriptors for all the NICs
> > and queue formats out there is not reasonable. Isovalent hired

I would expect BPF/driver experts to write the libraries for the
datapath API that the network/switch developer is going to use. I would
even put the BPF programs in the kernel tree and ship them with the
release if that helps.
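
To make that concrete, here's a rough sketch of the kind of accessor
such a library could ship. Everything below is hypothetical -- the
struct layout and names are made up to show the shape of it: the
vendor expert encodes the descriptor layout once, and XDP program
authors just call the helper instead of knowing offsets.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Hypothetical descriptor layout a vendor expert would publish once;
 * the real mlx5 completion entry layout lives in the driver. */
struct hw_tx_cqe {
	__u8   rsvd0[8];
	__be64 timestamp;	/* HW TX completion timestamp */
};

/* "Library" accessor shipped with the datapath library. Assumes the
 * hook hands the program a verifier-checked pointer to the raw
 * descriptor. */
static __always_inline __u64 hw_lib_tx_timestamp(const void *cqe)
{
	const struct hw_tx_cqe *c = cqe;

	return bpf_be64_to_cpu(c->timestamp);
}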

We have different visions, I think, of who the BPF user is that
writes XDP programs.

> > a lot of former driver developers so you may feel like it's a good
> > idea, as a middleware provider. But for the rest of us the matrix
> > of HW x queue format x people writing BPF is too large. If we can

It's nice, though, that we have good coverage for XDP, so the matrix
is big. Even with kfuncs we need someone to write the support; my
thought is it's just a question of whether they write it in BPF
or in C as a reader kfunc. I suspect for these advanced features
it's only a subset of hardware, at least up front. Either way, BPF
or C, you are stuck finding someone to write that code.

> > write some poor man's DPDK / common BPF driver library to be selected
> > at linking time - we can as well provide a generic interface in
> > the kernel itself. Again, we never merged explicit DPDK support, 
> > your idea is strictly worse.
> 
> I agree: we're writing an operating system kernel here. The *whole
> point* of an operating system is to provide an abstraction over
> different types of hardware and provide a common API so users don't have
> to deal with the hardware details.

And just to be clear, what we sacrifice then is forwards/backwards
portability. If it's a kernel kfunc, we need to add a kfunc for
every field we want to read, and it will only be available from that
kernel onwards. Further, it will need some general agreement that it's
useful before it gets added. A hardware vendor won't be able to add
some arbitrary field and get access to it. So that's what we lose by
doing kfuncs.
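
For contrast, a rough sketch of what the kfunc route looks like on
the kernel side. The context struct, helper, and names below are
invented for illustration, not the actual API from this RFC:

#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

/* Hypothetical context the TX hook would pass to the program. */
struct devtx_ctx {
	struct mlx5e_priv *priv;
	struct mlx5_cqe64 *cqe;
};

/* One kfunc per exposed field: every new field a vendor wants
 * visible means another one of these, reviewed and merged upstream. */
__bpf_kfunc u64 mlx5e_devtx_tx_timestamp(const struct devtx_ctx *ctx)
{
	/* hypothetical driver helper converting the raw CQE timestamp */
	return mlx5e_cqe_ts_to_ns(ctx->priv, ctx->cqe);
}

BTF_SET8_START(mlx5e_devtx_kfunc_ids)
BTF_ID_FLAGS(func, mlx5e_devtx_tx_timestamp)
BTF_SET8_END(mlx5e_devtx_kfunc_ids)

static const struct btf_kfunc_id_set mlx5e_devtx_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &mlx5e_devtx_kfunc_ids,
};
/* hooked up at driver init with something like:
 * register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &mlx5e_devtx_kfunc_set);
 */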

It's pushing complexity into the kernel, which we then maintain in
the kernel, when we could push the complexity into BPF and maintain
it as user space code and BPF programs. It's a choice to make, I think.

Also, abstraction can cost cycles. Here we have to prepare the
context structure and call the kfunc. The kfunc can be inlined if
folks do the work, so it may be a small cost, but it's not free.

> 
> I feel like there's some tension between "BPF as a dataplane API" and
> "BPF as a kernel extension language" here, especially as the BPF

Agree. I'm obviously not maximizing for ease of use of the dataplane
API as BPF. IMO, though, even with the kfunc abstraction it's niche
work to write low-level datapath code that exposes a user API higher
up the stack. With a DSL (P4, ...) for example you could
abstract away the complexity and then compile down into these
details. Or if you like tables, an OpenFlow-style table interface
would provide a table API.

> subsystem has grown more features in the latter direction. In my mind,
> XDP is still very much a dataplane API; in fact that's one of the main
> selling points wrt DPDK: you can get high performance networking but
> still take advantage of the kernel drivers and other abstractions that

I think we agree on the goal: a fast datapath for the NIC.

> the kernel provides. If you're going for raw performance and the ability
> to twiddle every tiny detail of the hardware, DPDK fills that niche
> quite nicely (and also shows us the pains of going that route).

Summary on my side: with raw descriptor reads we minimize kernel
complexity, we don't need to predict today what we will want to
read in the future, and we need folks who understand the hardware
regardless of whether the code lives in BPF or C. Writing it in C
certainly makes picking what to read easier, but we also have BTF,
which already solves this struct/offset problem for non-networking
use cases.
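
Rough sketch of what I mean on the BTF side, assuming the driver's
descriptor types are visible in (module) BTF and some TX completion
hook hands the program a raw cqe pointer -- that plumbing is the part
under debate:

#include "vmlinux.h"
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_endian.h>

/* CO-RE relocated read of the mlx5 completion entry's timestamp:
 * libbpf resolves the field offset against the running kernel's BTF
 * at load time, so the program hard-codes no offsets and follows
 * layout changes -- the same trick tracing programs already rely on. */
static __always_inline __u64 read_cqe_tx_timestamp(struct mlx5_cqe64 *cqe)
{
	__be64 ts = BPF_CORE_READ(cqe, timestamp);

	return bpf_be64_to_cpu(ts);
}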

> 
> -Toke
> 
