Message-ID: <ab65545f-c79c-492b-a699-39f7afa984ea@nvidia.com>
Date: Mon, 21 Jul 2025 14:43:15 +0300
From: Nimrod Oren <noren@...dia.com>
To: Mohsin Bashir <mohsin.bashr@...il.com>, netdev@...r.kernel.org
Cc: kuba@...nel.org, andrew+netdev@...n.ch, davem@...emloft.net,
edumazet@...gle.com, pabeni@...hat.com, shuah@...nel.org, horms@...nel.org,
cratiu@...dia.com, cjubran@...dia.com, mbloch@...dia.com,
jdamato@...tly.com, gal@...dia.com, sdf@...ichev.me, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
nathan@...nel.org, nick.desaulniers+lkml@...il.com, morbo@...gle.com,
justinstitt@...gle.com, bpf@...r.kernel.org,
linux-kselftest@...r.kernel.org, llvm@...ts.linux.dev, tariqt@...dia.com,
thoiland@...hat.com
Subject: Re: [PATCH net-next V6 2/5] selftests: drv-net: Test XDP_PASS/DROP
support
On 19/07/2025 11:30, Mohsin Bashir wrote:
> Test XDP_PASS/DROP in single buffer and multi buffer mode when
> XDP native support is available.
>
> ./drivers/net/xdp.py
> TAP version 13
> 1..4
> ok 1 xdp.test_xdp_native_pass_sb
> ok 2 xdp.test_xdp_native_pass_mb
> ok 3 xdp.test_xdp_native_drop_sb
> ok 4 xdp.test_xdp_native_drop_mb
> # Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0
>
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> Signed-off-by: Mohsin Bashir <mohsin.bashr@...il.com>
> ---
> tools/testing/selftests/drivers/net/Makefile | 1 +
> tools/testing/selftests/drivers/net/xdp.py | 303 ++++++++++++++++++
> .../selftests/net/lib/xdp_native.bpf.c | 158 +++++++++
> 3 files changed, 462 insertions(+)
> create mode 100755 tools/testing/selftests/drivers/net/xdp.py
> create mode 100644 tools/testing/selftests/net/lib/xdp_native.bpf.c
>
...
> +
> +static struct udphdr *filter_udphdr(struct xdp_md *ctx, __u16 port)
> +{
> + void *data_end = (void *)(long)ctx->data_end;
> + void *data = (void *)(long)ctx->data;
> + struct udphdr *udph = NULL;
> + struct ethhdr *eth = data;
> +
> + if (data + sizeof(*eth) > data_end)
> + return NULL;
> +
This check assumes that the packet headers reside in the linear part of
the xdp_buff, but that assumption does not hold for all drivers. In
mlx5, for example, the linear part is empty in multi-buffer mode with
the striding RQ configuration, so all of the multi-buffer test cases
fail over mlx5.

To work correctly across drivers, direct accesses to packet data should
be replaced with the safer helpers bpf_xdp_load_bytes() and
bpf_xdp_store_bytes().

Related discussion and context:
https://github.com/xdp-project/xdp-tools/pull/409
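As a rough illustration (an untested sketch, not a patch against this
series; the helper name check_udp_dport is made up here), the header
parsing could copy into local buffers via bpf_xdp_load_bytes() instead
of dereferencing ctx->data, along these lines:

```c
/* Hedged sketch: parse Ethernet/IP/UDP headers by copying them into
 * stack buffers with bpf_xdp_load_bytes(), so the logic also works on
 * drivers where the linear part of the xdp_buff is empty (e.g. mlx5
 * multi-buffer with striding RQ). Returns 1 on a UDP dport match.
 */
static int check_udp_dport(struct xdp_md *ctx, __u16 port)
{
	struct udphdr udph;
	struct ethhdr eth;
	__u32 off = sizeof(eth);

	/* bpf_xdp_load_bytes() returns a negative error if the range
	 * [offset, offset + len) is out of bounds, replacing the manual
	 * data/data_end checks.
	 */
	if (bpf_xdp_load_bytes(ctx, 0, &eth, sizeof(eth)))
		return 0;

	if (eth.h_proto == bpf_htons(ETH_P_IP)) {
		struct iphdr iph;

		if (bpf_xdp_load_bytes(ctx, off, &iph, sizeof(iph)) ||
		    iph.protocol != IPPROTO_UDP)
			return 0;
		off += sizeof(iph);
	} else if (eth.h_proto == bpf_htons(ETH_P_IPV6)) {
		struct ipv6hdr ip6h;

		if (bpf_xdp_load_bytes(ctx, off, &ip6h, sizeof(ip6h)) ||
		    ip6h.nexthdr != IPPROTO_UDP)
			return 0;
		off += sizeof(ip6h);
	} else {
		return 0;
	}

	if (bpf_xdp_load_bytes(ctx, off, &udph, sizeof(udph)))
		return 0;

	return udph.dest == bpf_htons(port);
}
```

Since the headers are copied rather than referenced in place, the
function returns a match result instead of a pointer; any mutation of
the packet would then go through bpf_xdp_store_bytes() at the same
offsets.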
> + if (eth->h_proto == bpf_htons(ETH_P_IP)) {
> + struct iphdr *iph = data + sizeof(*eth);
> +
> + if (iph + 1 > (struct iphdr *)data_end ||
> + iph->protocol != IPPROTO_UDP)
> + return NULL;
> +
> + udph = (void *)eth + sizeof(*iph) + sizeof(*eth);
> + } else if (eth->h_proto == bpf_htons(ETH_P_IPV6)) {
> + struct ipv6hdr *ipv6h = data + sizeof(*eth);
> +
> + if (ipv6h + 1 > (struct ipv6hdr *)data_end ||
> + ipv6h->nexthdr != IPPROTO_UDP)
> + return NULL;
> +
> + udph = (void *)eth + sizeof(*ipv6h) + sizeof(*eth);
> + } else {
> + return NULL;
> + }
> +
> + if (udph + 1 > (struct udphdr *)data_end)
> + return NULL;
> +
> + if (udph->dest != bpf_htons(port))
> + return NULL;
> +
> + record_stats(ctx, STATS_RX);
> +
> + return udph;
> +}