Message-ID: <aEixEV-nZxb1yjyk@lore-rh-laptop>
Date: Wed, 11 Jun 2025 00:26:25 +0200
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: Daniel Borkmann <daniel@...earbox.net>,
Stanislav Fomichev <stfomichev@...il.com>,
Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <borkmann@...earbox.net>,
Eric Dumazet <eric.dumazet@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>, sdf@...ichev.me,
kernel-team@...udflare.com, arthur@...hurfabre.com,
jakub@...udflare.com, Magnus Karlsson <magnus.karlsson@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Subject: Re: [PATCH bpf-next V1 7/7] net: xdp: update documentation for
xdp-rx-metadata.rst
> Daniel Borkmann <daniel@...earbox.net> writes:
>
[...]
> >>
> >> Why not have a new flag for bpf_redirect that transparently stores all
> >> available metadata? If you care only about the redirect -> skb case.
> >> Might give us more wiggle room in the future to make it work with
> >> traits.
> >
> > Also q from my side: If I understand the proposal correctly, in order to fully
> > populate an skb at some point, you have to call all the bpf_xdp_metadata_* kfuncs
> > to collect the data from the driver descriptors (indirect call), and then yet
> > again all equivalent bpf_xdp_store_rx_* kfuncs to store the data in struct
> > xdp_rx_meta. This seems rather costly, and once you add more metadata kfuncs,
> > aren't you better off switching to tc(x) directly so the driver can do all
> > this natively? :/
>
> I agree that the "one kfunc per metadata item" approach scales poorly.
> IIRC, the hope was (back when we added the initial HW metadata support)
> that we would be able to inline them to avoid the function call overhead.
>
> That being said, even with half a dozen function calls, that's still a
> lot less overhead than going all the way to TC(x). The goal of the use
> case here is to do as little work as possible on the CPU that initially
> receives the packet, instead moving the network stack processing (and
> skb allocation) to a different CPU with cpumap.
>
> So even if the *total* amount of work being done is a bit higher because
> of the kfunc overhead, that can still be beneficial because it's split
> between two (or more) CPUs.
>
> I'm sure Jesper has some concrete benchmarks for this lying around
> somewhere, hopefully he can share those :)
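For context, the two-step pattern Daniel describes looks more or less like
this from the BPF program side. This is just a sketch: bpf_xdp_metadata_rx_hash()
is the existing kfunc, while the bpf_xdp_store_rx_hash() signature is assumed
from this series and may not match the final code:

	// SPDX-License-Identifier: GPL-2.0
	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>

	/* Existing rx metadata kfunc (indirect call into the driver). */
	extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
					    enum xdp_rss_hash_type *rss_type) __ksym;
	/* Assumed from this series; final signature may differ. */
	extern int bpf_xdp_store_rx_hash(struct xdp_md *ctx, __u32 hash,
					 enum xdp_rss_hash_type rss_type) __ksym;

	struct {
		__uint(type, BPF_MAP_TYPE_CPUMAP);
		__uint(max_entries, 64);
		__type(key, __u32);
		__type(value, struct bpf_cpumap_val);
	} cpu_map SEC(".maps");

	SEC("xdp")
	int xdp_meta_redirect(struct xdp_md *ctx)
	{
		enum xdp_rss_hash_type rss_type;
		__u32 hash, cpu = 0;	/* CPU selection simplified to 0 */

		/* Step 1: collect from the driver descriptor... */
		if (!bpf_xdp_metadata_rx_hash(ctx, &hash, &rss_type))
			/* ...step 2: store it again for the xdp_frame -> skb path. */
			bpf_xdp_store_rx_hash(ctx, hash, rss_type);

		return bpf_redirect_map(&cpu_map, cpu, 0);
	}

	char _license[] SEC("license") = "GPL";
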
Another possible approach would be to have some utility functions (not
kfuncs) that 'store' the hw metadata in the xdp_frame, executed in each
driver codebase before performing XDP_REDIRECT (rough sketch below). The
downside of this approach is that we need to parse the hw metadata twice
if the eBPF program bound to the NIC consumes this info as well. What do
you think?
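Something along these lines, where every name is made up just to illustrate
the shape of the idea:

	/* Plain helper, not a kfunc; the driver parses the rx descriptor
	 * once and stashes the result so it travels with the xdp_frame.
	 */
	static inline void xdp_buff_store_rx_meta(struct xdp_buff *xdp, u32 hash,
						  enum xdp_rss_hash_type rss_type)
	{
		struct xdp_rx_meta *meta = xdp_buff_rx_meta(xdp); /* hypothetical */

		meta->hash.val = hash;
		meta->hash.type = rss_type;
	}

	/* Driver rx path, just before handing off the redirect
	 * (rx_hash_from_desc()/rx_hash_type_from_desc() are placeholders
	 * for the driver's own descriptor parsing):
	 */
	act = bpf_prog_run_xdp(xdp_prog, &xdp);
	if (act == XDP_REDIRECT) {
		xdp_buff_store_rx_meta(&xdp, rx_hash_from_desc(rx_desc),
				       rx_hash_type_from_desc(rx_desc));
		if (!xdp_do_redirect(netdev, &xdp, xdp_prog))
			return; /* frame consumed */
	}
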
Regards,
Lorenzo
>
> > Also, have you thought about taking the opportunity to generalize the existing
> > struct xsk_tx_metadata? It would be nice to actually use the same/similar struct
> > for RX and TX, similar to what is done in struct virtio_net_hdr. That way we
> > would have XDP_{RX,TX}_METADATA and XDP_{RX,TX}MD_FLAGS_* to describe what
> > metadata we have, and from a developer PoV this would be a nicely consistent
> > API in XDP. Then you could store at the right location in the metadata region
> > just with bpf_xdp_metadata_* kfuncs (and/or plain BPF code) and finally set the
> > XDP_RX_METADATA indicator bit.
>
> Wouldn't this make the whole thing (effectively) UAPI?
>
> -Toke
>