Message-ID: <20250805172831.213ddd8d@kernel.org>
Date: Tue, 5 Aug 2025 17:28:31 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Martin KaFai Lau <martin.lau@...ux.dev>, Lorenzo Bianconi
 <lorenzo@...nel.org>, Stanislav Fomichev <stfomichev@...il.com>,
 bpf@...r.kernel.org, netdev@...r.kernel.org, Alexei Starovoitov
 <ast@...nel.org>, Daniel Borkmann <borkmann@...earbox.net>, Eric Dumazet
 <eric.dumazet@...il.com>, "David S. Miller" <davem@...emloft.net>, Paolo
 Abeni <pabeni@...hat.com>, sdf@...ichev.me, kernel-team@...udflare.com,
 arthur@...hurfabre.com, jakub@...udflare.com, Jesse Brandeburg
 <jbrandeburg@...udflare.com>, Andrew Rzeznik <arzeznik@...udflare.com>
Subject: Re: [PATCH bpf-next V2 0/7] xdp: Allow BPF to set RX hints for
 XDP_REDIRECTed packets

On Mon, 4 Aug 2025 15:18:35 +0200 Jesper Dangaard Brouer wrote:
> On 01/08/2025 22.38, Jakub Kicinski wrote:
> > On Thu, 31 Jul 2025 18:27:07 +0200 Jesper Dangaard Brouer wrote:  
> >> I have strong reservations about having the BPF program itself trigger
> >> the SKB allocation. I believe this would fundamentally break the
> >> performance model that makes cpumap redirect so effective.  
> > 
> > See, I have similar concerns about growing struct xdp_frame.
> >   
> 
> IMHO there is a huge difference between doing memory allocs+init and
> growing struct xdp_frame.
> 
> It is very important to notice that this patchset is not actually
> growing xdp_frame in the traditional sense; instead we are adding an
> optional area to xdp_frame (plus some flags to tell if the area is in
> use).  Remember that the xdp_frame area is not allocated or mem-zeroed
> (except for the flags).  If not used, the members of struct
> xdp_rx_meta are never touched.

Yes, I get all that.

> Thus, there is actually no performance impact in growing struct
> xdp_frame in this way. Do you still have concerns?
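
Just so we're looking at the same shape, I read the layout as roughly
the following (a sketch; the field and flag names here are made up,
not quoted from the series):

        /* optional area, only meaningful when the flags say so */
        struct xdp_rx_meta {
                u32     hash;
                u32     hash_type;      /* e.g. L3/L4 RSS hash type */
                __be16  vlan_proto;
                u16     vlan_tci;
        };

        struct xdp_frame {
                void    *data;
                u16     len;
                u16     headroom;
                /* ... existing members ... */
                u32     flags;  /* bits mark which rx_meta fields are valid */
                struct xdp_rx_meta rx_meta;     /* never zeroed or touched
                                                 * unless actually used */
        };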

You're adding code in a number of paths, so I don't think it's fair to
claim that there is *no* performance impact. Maybe there's no impact
on XDP_DROP from the patches themselves, assuming the driver doesn't
pre-populate.

Do you have any idea how well this approach will scale to all the
fields people will want to add to xdp_frame in the future? The nice
thing about the SET ops is that the driver can define whatever ops it
supports, including things not supported by the skb (or only supported
through skb_ext), at zero cost to the common stack. If we define the
fields in the core we're back to the inflexibility of the skb world..
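
To be concrete: with the ops model a driver just grows its own ops
table, something like the below (the struct and the first three
callbacks mirror what we have today, IIRC; the driver name, function
names and the last entry are made up to illustrate the point):

        static const struct xdp_metadata_ops mydrv_xdp_metadata_ops = {
                .xmo_rx_timestamp       = mydrv_xdp_rx_timestamp,
                .xmo_rx_hash            = mydrv_xdp_rx_hash,
                .xmo_rx_vlan_tag        = mydrv_xdp_rx_vlan_tag,
                /* hypothetical extra hint: costs a new callback slot and
                 * a kfunc, but no new field carried in xdp_frame or skb */
                .xmo_rx_hw_flow_mark    = mydrv_xdp_rx_hw_flow_mark,
        };

        netdev->xdp_metadata_ops = &mydrv_xdp_metadata_ops;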

> > That's why the guiding principle for me would be to make sure that
> > the features we add, beyond "classic XDP" as needed by DDoS, are
> > entirely optional.   
> 
> Exactly, we agree.  What we do in this patchset is entirely optional.
> These changes do not slow down "classic XDP" or our DDoS use-case.
> 
> > And if we count moving the skb allocation out of the driver among
> > the goals of growing xdp_frame, the drivers will sooner or later
> > unconditionally populate the xdp_frame. Decreasing the performance
> > of "classic XDP"?
> 
> No, that is the beauty of this solution, it will not decrease the
> performance of "classic XDP".
> 
> Do keep in mind that "moving skb allocation out of the driver" is not
> part of this patchset; it is a moonshot goal that will take a long
> time (though we have already been "simulating" it via XDP-redirect
> for years now).  Drivers should obviously not unconditionally
> populate the xdp_frame's rx_meta area.  The right time to populate
> rx_meta is once the driver reaches the XDP_PASS case (normal netstack
> delivery).  Today all drivers will at this stage populate the SKB
> metadata (e.g. rx-hash + vlan) from the RX descriptor anyway.  Thus,
> I don't see how replacing those writes will decrease performance.

I don't think it's at all obvious that the driver should not
unconditionally populate the xdp_frame. It seems like the logical
direction to me, TBH. The driver pre-populates, then the conversion
and GET callbacks become trivial and generic..
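
Something along these lines (again a sketch, the descriptor field,
flag and field names are made up):

        /* driver RX path, after parsing the descriptor: */
        xdpf->rx_meta.hash = le32_to_cpu(rx_desc->hash);
        xdpf->flags |= XDP_FRAME_F_RX_HASH;

        /* generic xdp_frame -> skb conversion, no per-driver callback: */
        if (xdpf->flags & XDP_FRAME_F_RX_HASH)
                skb_set_hash(skb, xdpf->rx_meta.hash, PKT_HASH_TYPE_L4);
        if (xdpf->flags & XDP_FRAME_F_RX_VLAN)
                __vlan_hwaccel_put_tag(skb, xdpf->rx_meta.vlan_proto,
                                       xdpf->rx_meta.vlan_tci);

(or carry the hash type in rx_meta as well, rather than hard-coding it)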

Perhaps we should try to convert a real driver in this series.

> >> The key to XDP's high performance lies in processing a bulk of
> >> xdp_frames in a tight loop to amortize costs. The existing cpumap code
> >> on the remote CPU is already highly optimized for this: it performs bulk
> >> allocation of SKBs and uses careful prefetching to hide the memory
> >> latency. Allowing a BPF program to sometimes trigger a heavyweight SKB
> >> alloc+init (4 cache-line misses) would bypass all these existing
> >> optimizations. It would introduce significant jitter into the pipeline
> >> and disrupt the entire bulk-processing model we rely on for performance.
> >>
> >> This performance is not just theoretical;  
> > 
> > Somewhat off-topic for the architecture, I think, but do you happen
> > to have any real life data for that? IIRC the "listification" was a
> > moderate success for the skb path.. Or am I misreading and you have
> > other benefits of a tight processing loop in mind?  
> 
> Our "tight processing loop" for NAPI (net_rx_action/napi_pool) is not
> performing as well as we want. One major reason is that the CPU is being
> stalled each time in the loop when the NIC driver needs to clear the 4
> cache-lines for the SKB.  XDP have shown us that avoiding these steps is
> a huge performance boost.

Do you know what uarch resource it's stalling on?
It's been on my mind whether, in the attempts to zero out as little as
possible, we didn't defeat the CPU optimizations for clearing full
cache lines.
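
I.e. IIRC today __alloc_skb() only clears up to the tail pointer:

        memset(skb, 0, offsetof(struct sk_buff, tail));

and I wonder whether rounding a clear like that up to whole cache
lines (so the CPU can avoid the read-for-ownership) wouldn't end up
cheaper than the stall we see now.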

> The "moving skb allocation out of the driver"
> is one step towards improving the NAPI loop. As you hint we also need
> some bulking or "listification".  I'm not a huge fan of SKB
> "listification". XDP-redirect devmap/cpumap uses an array for creating
> an RX bulk "stage".  The SKB listification work was never fully
> completed IMHO.  Back then, I was working on getting PoC for SKB
> forwarding working, but as soon as we reached any of the netfilter hooks
> points the SKB list would get split into individual SKBs. IIRC SKB
> listification only works for the first part of netstack SKB input code
> path. And "late" part of qdisc TX layer, but the netstack code in-
> between will always cause the SKB list would get split into individual
> SKBs.  IIRC only back-pressure during qdisc TX will cause listification
> to be used. It would be great if someone have cycles to work on
> completing more of the SKB listification.
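
FWIW, the cpumap bulk "stage" we keep referring to is roughly this
pattern (simplified from memory, not a verbatim copy of
cpu_map_kthread_run()):

        /* pull a batch of xdp_frames off the ptr_ring in one go */
        n = __ptr_ring_consume_batched(rcpu->queue, frames, CPUMAP_BATCH);

        /* prefetch frame data early to hide the cache-miss latency */
        for (i = 0; i < n; i++)
                prefetchw(frames[i]);

        /* one bulk allocation covers the skbs for the whole batch */
        m = kmem_cache_alloc_bulk(skbuff_cache, gfp, n, skbs);

        /* build the skbs and hand them to the stack in a tight loop */
        for (i = 0; i < n; i++)
                __xdp_build_skb_from_frame(frames[i], skbs[i],
                                           frames[i]->dev_rx);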
