Message-ID: <4ceac69b-d2ae-91b5-1b24-b02c8faa902b@gmail.com>
Date: Tue, 30 May 2023 15:17:11 +0300
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jesper Dangaard Brouer <jbrouer@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
John Fastabend <john.fastabend@...il.com>, Jakub Kicinski <kuba@...nel.org>
Cc: brouer@...hat.com, Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
netdev@...r.kernel.org, Gal Pressman <gal@...dia.com>,
Nimrod Oren <noren@...dia.com>, Tariq Toukan <tariqt@...dia.com>
Subject: Re: [PATCH bpf-next 1/2] samples/bpf: fixup xdp_redirect tool to be
able to support xdp multibuffer
On 30/05/2023 14:33, Jesper Dangaard Brouer wrote:
>
>
> On 29/05/2023 13.06, Tariq Toukan wrote:
>> Expand the xdp multi-buffer support to xdp_redirect tool.
>> Similar to what's done in commit
>> 772251742262 ("samples/bpf: fixup some tools to be able to support xdp
>> multibuffer")
>> and its fix commit
>> 7a698edf954c ("samples/bpf: Fix MAC address swapping in xdp2_kern").
>>
>
> Have you tested whether this causes a performance degradation?
>
> (Also found possible bug below)
>
Hi Jesper,
This introduces the same known perf degradation we already have in xdp1
and xdp2.
Unfortunately, this is the API we have today to safely support multi-buffer.
Note that both the performance and the functional degradation (noted
below) should be eliminated once the load/store operations are replaced
with dynptr logic that returns a pointer into the scatter entry instead
of copying it.
I initiated a discussion on this topic a few months ago. dynptr has
since been accepted, but I'm not aware of any in-progress follow-up work
that addresses this.
>> Signed-off-by: Tariq Toukan <tariqt@...dia.com>
>> Reviewed-by: Nimrod Oren <noren@...dia.com>
>> ---
>> samples/bpf/xdp_redirect.bpf.c | 16 ++++++++++++----
>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/samples/bpf/xdp_redirect.bpf.c
>> b/samples/bpf/xdp_redirect.bpf.c
>> index 7c02bacfe96b..620163eb7e19 100644
>> --- a/samples/bpf/xdp_redirect.bpf.c
>> +++ b/samples/bpf/xdp_redirect.bpf.c
>> @@ -16,16 +16,21 @@
>> const volatile int ifindex_out;
>> -SEC("xdp")
>> +#define XDPBUFSIZE 64
>
> Pktgen sample scripts default to sending 60-byte packets, because the
> 4-byte FCS (end-of-frame checksum) is added by hardware.
>
> Will this result in an error when bpf_xdp_load_bytes() tries to copy 64
> bytes from a 60-byte packet?
>
Yes.
This can be resolved by reducing XDPBUFSIZE to 60.
We need to check whether it's OK to disregard these last 4 bytes without
hurting the XDP program logic.
If so, do you suggest changing xdp1 and xdp2 as well?
>> +SEC("xdp.frags")
>> int xdp_redirect_prog(struct xdp_md *ctx)
>> {
>> - void *data_end = (void *)(long)ctx->data_end;
>> - void *data = (void *)(long)ctx->data;
>> + __u8 pkt[XDPBUFSIZE] = {};
>> + void *data_end = &pkt[XDPBUFSIZE-1];
>> + void *data = pkt;
>> u32 key = bpf_get_smp_processor_id();
>> struct ethhdr *eth = data;
>> struct datarec *rec;
>> u64 nh_off;
>> + if (bpf_xdp_load_bytes(ctx, 0, pkt, sizeof(pkt)))
>> + return XDP_DROP;
>
> E.g. sizeof(pkt) = 64 bytes here.
>
>> +
>> nh_off = sizeof(*eth);
>> if (data + nh_off > data_end)
>> return XDP_DROP;
>> @@ -36,11 +41,14 @@ int xdp_redirect_prog(struct xdp_md *ctx)
>> NO_TEAR_INC(rec->processed);
>> swap_src_dst_mac(data);
>> + if (bpf_xdp_store_bytes(ctx, 0, pkt, sizeof(pkt)))
>> + return XDP_DROP;
>> +
>> return bpf_redirect(ifindex_out, 0);
>> }
>> /* Redirect require an XDP bpf_prog loaded on the TX device */
>> -SEC("xdp")
>> +SEC("xdp.frags")
>> int xdp_redirect_dummy_prog(struct xdp_md *ctx)
>> {
>> return XDP_PASS;
>