Message-Id: <cover.1671049840.git.dxu@dxuuu.xyz>
Date: Wed, 14 Dec 2022 16:25:27 -0700
From: Daniel Xu <dxu@...uu.xyz>
To: davem@...emloft.net, yoshfuji@...ux-ipv6.org, dsahern@...nel.org,
martin.lau@...ux.dev, daniel@...earbox.net, ast@...nel.org,
andrii@...nel.org
Cc: ppenkov@...atrix.com, dbird@...atrix.com, netdev@...r.kernel.org,
bpf@...r.kernel.org
Subject: [PATCH bpf-next 0/6] Support defragmenting IPv4 packets in BPF
=== Context ===
In the context of a middlebox, fragmented packets are tricky to handle.
The full 5-tuple of a packet is often only available in the first
fragment, which makes enforcing consistent policy difficult. There are
really only two stateless options, neither of which is very nice:

1. Enforce policy on the first fragment and accept all subsequent
   fragments. This works but may let in certain attacks or allow data
   exfiltration.

2. Enforce policy on the first fragment and drop all subsequent
   fragments. This does not really work because some protocols rely on
   fragmentation. For example, DNS may rely on oversized UDP packets for
   large responses.
So stateful tracking is the only sane option. RFC 8900 [0] calls this
out as well in section 6.3:

    Middleboxes [...] should process IP fragments in a manner that is
    consistent with [RFC0791] and [RFC8200]. In many cases, middleboxes
    must maintain state in order to achieve this goal.
=== BPF related bits ===
However, when policy is enforced through BPF, the prog is run before the
kernel reassembles fragmented packets. This leaves BPF developers in an
awkward place: implement reassembly (possibly poorly) or use a stateless
method as described above.
Fortunately, the kernel already has robust support for defragmenting
IPv4 packets.
This patchset wraps the existing defragmentation facilities in a kfunc so
that BPF progs running on middleboxes can reassemble fragmented packets
before applying policy.
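
For a rough feel of the intended usage, the sketch below shows how a TC
ingress program could call the kfunc before applying policy. The
prototype, the hook point, and the return-value convention are
assumptions made purely for illustration; patch 3 has the real
definition.

/* Illustrative sketch only: the kfunc prototype and semantics below
 * are assumed, not taken from patch 3.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct sk_buff;

/* Assumed prototype for this sketch. */
extern int bpf_ip_check_defrag(struct sk_buff *skb) __ksym;

SEC("tc")
int defrag_then_police(struct __sk_buff *ctx)
{
        /* Hand the packet to the kernel's defrag machinery. A nonzero
         * return is assumed to mean "fragment queued or error", i.e.
         * there is no full packet to apply the 5-tuple policy to yet.
         */
        if (bpf_ip_check_defrag((struct sk_buff *)ctx))
                return TC_ACT_SHOT;

        /* ctx now describes a full packet; apply 5-tuple policy here. */
        return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";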
=== Patchset details ===
This patchset is (hopefully) relatively straightforward from a BPF
perspective.
One thing I'd like to call out is the skb_copy()ing of the prog skb. I
did this to maintain the invariant that the ctx remains valid after the
prog has run. This is relevant because ip_defrag() and ip_check_defrag()
may consume the skb if the skb is a fragment.
Originally I played around with teaching the verifier about kfuncs that
may consume the ctx and disallowing ctx accesses in ret != 0 branches.
It worked OK, but modifying the surrounding assumptions about ctx
validity seemed too complex.
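
To make the skb_copy() point above concrete, here is roughly the shape
of the pattern. This is not the code from this series: it uses today's
upstream ip_check_defrag() signature (patch 1 reworks its return
values) and borrows an existing defrag user just for the sketch.

#include <linux/skbuff.h>
#include <net/ip.h>

/* Illustration only, not patch 3. */
static struct sk_buff *defrag_on_a_copy(struct net *net,
                                        struct sk_buff *prog_skb)
{
        struct sk_buff *copy;

        /* Never pass the prog's skb in directly: if it is a fragment,
         * ip_check_defrag() may queue and thus consume it, leaving the
         * BPF ctx pointing at freed memory.
         */
        copy = skb_copy(prog_skb, GFP_ATOMIC);
        if (!copy)
                return NULL;

        /* Returns NULL if 'copy' was queued as a partial fragment (or
         * on error); either way prog_skb (the ctx) remains valid.
         */
        return ip_check_defrag(net, copy, IP_DEFRAG_LOCAL_DELIVER);
}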
[0]: https://datatracker.ietf.org/doc/html/rfc8900
Daniel Xu (6):
ip: frags: Return actual error codes from ip_check_defrag()
bpf: verifier: Support KF_CHANGES_PKT flag
bpf, net, frags: Add bpf_ip_check_defrag() kfunc
bpf: selftests: Support not connecting client socket
bpf: selftests: Support custom type and proto for client sockets
bpf: selftests: Add bpf_ip_check_defrag() selftest
Documentation/bpf/kfuncs.rst | 7 +
drivers/net/macvlan.c | 2 +-
include/linux/btf.h | 1 +
include/net/ip.h | 11 +
kernel/bpf/verifier.c | 8 +
net/ipv4/Makefile | 1 +
net/ipv4/ip_fragment.c | 13 +-
net/ipv4/ip_fragment_bpf.c | 98 ++++++
net/packet/af_packet.c | 2 +-
.../selftests/bpf/generate_udp_fragments.py | 52 +++
tools/testing/selftests/bpf/network_helpers.c | 26 +-
tools/testing/selftests/bpf/network_helpers.h | 3 +
.../bpf/prog_tests/ip_check_defrag.c | 296 ++++++++++++++++++
.../selftests/bpf/progs/bpf_tracing_net.h | 1 +
.../selftests/bpf/progs/ip_check_defrag.c | 83 +++++
15 files changed, 589 insertions(+), 15 deletions(-)
create mode 100644 net/ipv4/ip_fragment_bpf.c
create mode 100755 tools/testing/selftests/bpf/generate_udp_fragments.py
create mode 100644 tools/testing/selftests/bpf/prog_tests/ip_check_defrag.c
create mode 100644 tools/testing/selftests/bpf/progs/ip_check_defrag.c
--
2.39.0