Message-ID: <aFA5hxzOkxVMB_eZ@mini-arch>
Date: Mon, 16 Jun 2025 08:34:31 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Daniel Borkmann <borkmann@...earbox.net>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Toke Høiland-Jørgensen <toke@...hat.com>,
Daniel Borkmann <daniel@...earbox.net>, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>, sdf@...ichev.me,
kernel-team@...udflare.com, arthur@...hurfabre.com,
jakub@...udflare.com, Magnus Karlsson <magnus.karlsson@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
arzeznik@...udflare.com, Yan Zhai <yan@...udflare.com>
Subject: Re: [PATCH bpf-next V1 7/7] net: xdp: update documentation for
xdp-rx-metadata.rst
On 06/13, Jesper Dangaard Brouer wrote:
>
> On 11/06/2025 05.40, Stanislav Fomichev wrote:
> > On 06/11, Lorenzo Bianconi wrote:
> > > > Daniel Borkmann <daniel@...earbox.net> writes:
> > > >
> > > [...]
> > > > > >
> > > > > > Why not have a new flag for bpf_redirect that transparently stores all
> > > > > > available metadata? If you care only about the redirect -> skb case.
> > > > > > Might give us more wiggle room in the future to make it work with
> > > > > > traits.
> > > > >
> > > > > Also a q from my side: If I understand the proposal correctly, in order to
> > > > > fully populate an skb at some point, you have to call all the
> > > > > bpf_xdp_metadata_* kfuncs to collect the data from the driver descriptors
> > > > > (indirect calls), and then call all the equivalent bpf_xdp_store_rx_* kfuncs
> > > > > to re-store that data in struct xdp_rx_meta. This seems rather costly, and
> > > > > once you add more metadata kfuncs, aren't you better off switching to tc(x)
> > > > > directly so the driver can do all of this natively? :/
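> > > > >
> > > > > For concreteness, roughly (the bpf_xdp_store_rx_* kfunc name and
> > > > > signature below are assumed from this series):
> > > > >
> > > > >   #include <vmlinux.h>
> > > > >   #include <bpf/bpf_helpers.h>
> > > > >
> > > > >   /* existing reader kfunc */
> > > > >   extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
> > > > >                                       __u32 *hash,
> > > > >                                       enum xdp_rss_hash_type *rss_type) __ksym;
> > > > >   /* store kfunc proposed in this series (signature assumed) */
> > > > >   extern int bpf_xdp_store_rx_hash(struct xdp_md *ctx, __u32 hash,
> > > > >                                    enum xdp_rss_hash_type rss_type) __ksym;
> > > > >
> > > > >   SEC("xdp")
> > > > >   int collect_and_store(struct xdp_md *ctx)
> > > > >   {
> > > > >           enum xdp_rss_hash_type rss_type;
> > > > >           __u32 hash;
> > > > >
> > > > >           /* indirect call into the driver's xmo_rx_hash() handler ... */
> > > > >           if (!bpf_xdp_metadata_rx_hash(ctx, &hash, &rss_type))
> > > > >                   /* ... plus a second kfunc call to stash the value
> > > > >                    * for the later xdp_frame -> skb conversion */
> > > > >                   bpf_xdp_store_rx_hash(ctx, hash, rss_type);
> > > > >
> > > > >           /* repeat for timestamp, vlan tag, ... */
> > > > >           return XDP_PASS; /* or redirect via cpumap */
> > > > >   }
> > > > >
> > > > >   char _license[] SEC("license") = "GPL";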
> > > >
> > > > I agree that the "one kfunc per metadata item" approach scales poorly. IIRC, the
> > > > hope was (back when we added the initial HW metadata support) that we
> > > > would be able to inline them to avoid the function call overhead.
> > > >
> > > > That being said, even with half a dozen function calls, that's still a
> > > > lot less overhead than going all the way to TC(x). The goal of the use
> > > > case here is to do as little work as possible on the CPU that initially
> > > > receives the packet, instead moving the network stack processing (and
> > > > skb allocation) to a different CPU with cpumap.
> > > >
> > > > So even if the *total* amount of work being done is a bit higher because
> > > > of the kfunc overhead, that can still be beneficial because it's split
> > > > between two (or more) CPUs.
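> > > >
> > > > A minimal sketch of that split (CPU index and queue sizing are made up):
> > > >
> > > >   #include <vmlinux.h>
> > > >   #include <bpf/bpf_helpers.h>
> > > >
> > > >   struct {
> > > >           __uint(type, BPF_MAP_TYPE_CPUMAP);
> > > >           __uint(max_entries, 64);
> > > >           __type(key, __u32);
> > > >           __type(value, struct bpf_cpumap_val);
> > > >   } cpu_map SEC(".maps");
> > > >
> > > >   SEC("xdp")
> > > >   int move_processing(struct xdp_md *ctx)
> > > >   {
> > > >           /* do as little as possible here; the cpumap kthread on the
> > > >            * target CPU allocates the skb and runs the network stack */
> > > >           return bpf_redirect_map(&cpu_map, 2 /* target CPU */, 0);
> > > >   }
> > > >
> > > >   char _license[] SEC("license") = "GPL";
> > > >
> > > > (User space still has to populate each cpu_map entry's
> > > > bpf_cpumap_val.qsize before redirecting to it.)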
> > > >
> > > > I'm sure Jesper has some concrete benchmarks for this lying around
> > > > somewhere, hopefully he can share those :)
> > >
> > > Another possible approach would be to have some utility functions (not kfuncs)
> > > that 'store' the hw metadata in the xdp_frame, executed in each driver
> > > codebase before performing XDP_REDIRECT. The downside of this approach is
> > > that we need to parse the hw metadata twice if the eBPF program bound to
> > > the NIC consumes this info. What do you think?
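> > >
> > > Roughly like the sketch below (the helper itself is hypothetical, and
> > > xdp_store_rx_hash() is assumed from this series):
> > >
> > >   /* called by the driver right before XDP_REDIRECT, reusing the
> > >    * existing xmo callbacks to read the hw descriptor */
> > >   static void xdp_buff_save_rx_meta(struct xdp_buff *xdp)
> > >   {
> > >           const struct xdp_metadata_ops *xmo =
> > >                   xdp->rxq->dev->xdp_metadata_ops;
> > >           enum xdp_rss_hash_type rss_type;
> > >           u32 hash;
> > >
> > >           if (!xmo)
> > >                   return;
> > >           if (xmo->xmo_rx_hash &&
> > >               !xmo->xmo_rx_hash((struct xdp_md *)xdp, &hash, &rss_type))
> > >                   xdp_store_rx_hash(xdp, hash, rss_type);
> > >           /* ditto for xmo_rx_timestamp(), xmo_rx_vlan_tag(), ... */
> > >   }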
> >
> > That's the option I was asking about. I'm assuming we should be able
> > to reuse the existing xmo metadata callbacks for this, and hopefully
> > we can hide it from the drivers as well.
>
> I'm not against this idea of transparently storing all available metadata
> into the xdp_frame (via some flag/config), but it does not fit our
> production use-case. I also think that this can be added later.
>
> We need the ability to overwrite the RX-hash value before redirecting the
> packet to CPUMAP (remember, as the cover letter describes, the RX-hash is
> needed *before* the GRO engine processes the packet in CPUMAP; this is
> before TC/BPF).
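>
> E.g. (sketch reusing the declarations from the snippets above; the
> inner-hash helper is hypothetical):
>
>   SEC("xdp")
>   int set_inner_hash(struct xdp_md *ctx)
>   {
>           /* hash over the *inner* flow of a tunneled packet, so GRO on
>            * the cpumap CPU aggregates per inner flow */
>           __u32 hash = compute_inner_flow_hash(ctx); /* hypothetical */
>
>           bpf_xdp_store_rx_hash(ctx, hash, XDP_RSS_TYPE_L4_IPV4_TCP);
>           return bpf_redirect_map(&cpu_map, 0, 0);
>   }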
Makes sense. Can we make GRO not flush a bucket for same_flow=0 instead?
That would also make it work better for regular tunneled traffic.
Setting the hash in BPF to make GRO go fast seems too implementation-specific :-(