Message-ID: <20250409182237.441532-1-jordan@jrife.io>
Date: Wed, 9 Apr 2025 11:22:29 -0700
From: Jordan Rife <jordan@jrife.io>
To: netdev@vger.kernel.org,
	bpf@vger.kernel.org
Cc: Jordan Rife <jordan@jrife.io>,
	Aditi Ghag <aditi.ghag@isovalent.com>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Martin KaFai Lau <martin.lau@linux.dev>,
	Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
	Kuniyuki Iwashima <kuniyu@amazon.com>
Subject: [PATCH v1 bpf-next 0/5] Exactly-once UDP socket iteration

Both UDP and TCP socket iterators use iter->offset to track progress
through a bucket, which is a measure of the number of matching sockets
from the current bucket that have been seen or processed by the
iterator. On subsequent iterations, if the current bucket has
unprocessed items, we skip at least iter->offset matching items in the
bucket before adding any remaining items to the next batch. However,
iter->offset isn't always an accurate measure of "things already seen"
when the underlying bucket changes between reads, which can lead to
repeated or skipped sockets. Instead, this series remembers the cookies
of the sockets we haven't seen yet in the current bucket and resumes
from the first cookie in that list that can still be found in the
bucket on the next iteration.
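
As a toy illustration of the failure mode (self-contained userspace C,
not the kernel code; the "bucket" is reduced to an array of socket
IDs):

    #include <stdio.h>

    int main(void)
    {
            /* First read: the bucket held { 10, 20, 30, 40 } and we
             * processed 10 and 20, so iter->offset was left at 2.
             */
            int offset = 2;

            /* Socket 10 closed between reads, so the bucket is now
             * { 20, 30, 40 }. Resuming at bucket[offset] skips
             * socket 30, which we never processed.
             */
            int bucket[] = { 20, 30, 40 };
            int bucket_len = sizeof(bucket) / sizeof(bucket[0]);

            for (int i = offset; i < bucket_len; i++)
                    printf("processing %d\n", bucket[i]); /* only 40 */

            return 0;
    }
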
To be more specific, this series replaces struct sock **batch inside
struct bpf_udp_iter_state with union bpf_udp_iter_batch_item *batch,
where union bpf_udp_iter_batch_item can contain either a pointer to a
socket or a socket cookie. During reads, batch contains pointers to all
sockets in the current batch, while between reads it contains the
cookies of the sockets in the current bucket that have yet to be
processed. On subsequent reads, when iteration resumes,
bpf_iter_udp_batch finds the first saved cookie that matches a socket in
the bucket's socket list and picks up from there to construct the next
batch. On average, assuming it's rare that the next socket disappears
before the next read occurs, we should only need to scan as much as we
did with the offset-based approach to find the starting point. In the
case that the next socket is no longer there, we keep scanning through
the saved cookies list until we find a match. The worst case is when
none of the sockets from last time exist anymore, but again, this should
be rare.
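
The following self-contained sketch (plain userspace C) shows the shape
of this resume logic. The union mirrors the bpf_udp_iter_batch_item
described above; everything else (field names, helpers, the fallback
when no cookie matches) is illustrative rather than the actual patch
code:

    #include <stdint.h>
    #include <stdio.h>

    struct sock { uint64_t cookie; };      /* stand-in for struct sock */

    /* Holds a socket pointer during a read, or just the socket's
     * cookie between reads.
     */
    union bpf_udp_iter_batch_item {
            struct sock *sk;
            uint64_t cookie;
    };

    /* Find the first saved cookie that still matches a socket in the
     * bucket and start the next batch there. Scanning the cookies in
     * order means a single removed socket costs only one extra pass.
     */
    static int find_resume_idx(const struct sock *bucket, int len,
                               const union bpf_udp_iter_batch_item *saved,
                               int n_saved)
    {
            for (int i = 0; i < n_saved; i++)
                    for (int j = 0; j < len; j++)
                            if (bucket[j].cookie == saved[i].cookie)
                                    return j;
            return 0;       /* illustrative fallback: rescan the bucket */
    }

    int main(void)
    {
            /* Bucket after socket 20 closed between reads. */
            struct sock bucket[] = { {10}, {30}, {40} };
            /* Cookies of the sockets we had not yet processed. */
            union bpf_udp_iter_batch_item saved[] = {
                    { .cookie = 20 }, { .cookie = 30 }, { .cookie = 40 },
            };
            int idx = find_resume_idx(bucket, 3, saved, 3);

            for (int j = idx; j < 3; j++)
                    printf("processing %llu\n",
                           (unsigned long long)bucket[j].cookie);
            return 0;       /* prints 30 and 40: no skips, no repeats */
    }
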
CHANGES
=======

rfc [1] -> v1:
* Use hlist_entry_safe directly to retrieve the first socket in the
  current bucket's linked list instead of immediately breaking from
  udp_portaddr_for_each_entry (Martin); a standalone sketch of this
  helper follows the links below.
* Cancel iteration if bpf_iter_udp_realloc_batch() can't grab enough
  memory to contain a full snapshot of the current bucket, to prevent
  unwanted skips or repeats [2].

[1]: https://lore.kernel.org/bpf/20250404220221.1665428-1-jordan@jrife.io/
[2]: https://lore.kernel.org/bpf/CABi4-ogUtMrH8-NVB6W8Xg_F_KDLq=yy-yu-tKr2udXE2Mu1Lg@mail.gmail.com/
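
To make the first change above concrete, here is a standalone sketch of
what hlist_entry_safe() does; the kernel helpers are reproduced (they
normally come from <linux/list.h>) so the example compiles on its own
with GNU C extensions:

    #include <stddef.h>
    #include <stdio.h>

    struct hlist_node { struct hlist_node *next, **pprev; };
    struct hlist_head { struct hlist_node *first; };

    /* Reproduced kernel-style helpers (GNU C statement expressions). */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))
    #define hlist_entry_safe(ptr, type, member) \
            ({ typeof(ptr) ____ptr = (ptr); \
               ____ptr ? container_of(____ptr, type, member) : NULL; })

    struct sock { int id; struct hlist_node node; };

    int main(void)
    {
            struct hlist_head bucket = { .first = NULL };

            /* With an empty bucket this safely yields NULL instead of
             * applying container_of() to a NULL node, so the caller
             * can grab the first entry directly without opening a
             * loop just to break out of it.
             */
            struct sock *first = hlist_entry_safe(bucket.first,
                                                  struct sock, node);

            printf("first = %p\n", (void *)first); /* prints (nil) */
            return 0;
    }
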
Jordan Rife (5):
  bpf: udp: Use bpf_udp_iter_batch_item for bpf_udp_iter_state batch
    items
  bpf: udp: Avoid socket skips and repeats during iteration
  bpf: udp: Propagate ENOMEM up from bpf_iter_udp_batch
  selftests/bpf: Return socket cookies from sock_iter_batch progs
  selftests/bpf: Add tests for bucket resume logic in UDP socket
    iterators

 include/linux/udp.h                           |   3 +
 net/ipv4/udp.c                                | 101 +++-
 .../bpf/prog_tests/sock_iter_batch.c          | 451 +++++++++++++++++-
 .../selftests/bpf/progs/bpf_tracing_net.h     |   1 +
 .../selftests/bpf/progs/sock_iter_batch.c     |  24 +-
 5 files changed, 538 insertions(+), 42 deletions(-)
--
2.43.0