Message-ID: <dc19366d-8516-9f2a-b6ed-d9323e9250c9@gmail.com>
Date: Tue, 30 May 2023 16:40:41 +0300
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jesper Dangaard Brouer <jbrouer@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
John Fastabend <john.fastabend@...il.com>, Jakub Kicinski <kuba@...nel.org>
Cc: brouer@...hat.com, Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
netdev@...r.kernel.org, Gal Pressman <gal@...dia.com>,
Nimrod Oren <noren@...dia.com>, Tariq Toukan <tariqt@...dia.com>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>, drosen@...gle.com,
Joanne Koong <joannelkoong@...il.com>, henning.fehrmann@....mpg.de,
oliver.behnke@....mpg.de
Subject: Re: [PATCH bpf-next 1/2] samples/bpf: fixup xdp_redirect tool to be
able to support xdp multibuffer
On 30/05/2023 15:40, Jesper Dangaard Brouer wrote:
>
>
> On 30/05/2023 14.17, Tariq Toukan wrote:
>>
>> On 30/05/2023 14:33, Jesper Dangaard Brouer wrote:
>>>
>>>
>>> On 29/05/2023 13.06, Tariq Toukan wrote:
>>>> Expand xdp multi-buffer support to the xdp_redirect tool.
>>>> Similar to what's done in commit
>>>> 772251742262 ("samples/bpf: fixup some tools to be able to support
>>>> xdp multibuffer")
>>>> and its fix commit
>>>> 7a698edf954c ("samples/bpf: Fix MAC address swapping in xdp2_kern").
>>>>
>>>
>>> Have you tested whether this causes a performance degradation?
>>>
>>> (Also found possible bug below)
>>>
>>
>> Hi Jesper,
>>
>> This introduces the same known perf degradation we already have in
>> xdp1 and xdp2.
>
> Did a quick test with xdp1, the performance degradation is around 18%.
>
> Before: 22,917,961 pps
> After: 18,798,336 pps
>
> (1-(18798336/22917961))*100 = 17.97%
>
>
>> Unfortunately, this is the API we have today to safely support
>> multi-buffer.
>> Note that both the perf degradation and the functional issue (noted
>> below) should be eliminated once the load/store operations are replaced
>> with dynptr logic that returns a pointer to the scatter entry instead
>> of copying it.
>>
>
> Well, should we use dynptr logic in this patch then?
>
AFAIU it's not yet ready to be used...
I'm not sure what parts are missing; I'll need to review it more deeply.
> Does it make sense to add sample code that does things in a way that is
> sub-optimal and that we want to replace?
> ... (I fear people will copy-paste the sample code).
>
I get your point.
As xdp1 and xdp2 already work this way, I thought we'd want multi-buffer
samples for XDP_REDIRECT as well. We use these samples for internal
testing.
>> I initiated a discussion on this topic a few months ago. dynptr was
>> accepted since then, but I'm not aware of any in-progress followup
>> work that addresses this.
>>
>
> Are you saying some more work is needed on dynptr?
>
AFAIU, yes.
But I might be wrong... I need to revisit this.
Do you know whether dynptr can be used immediately?
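To make the discussion concrete, the rough direction I have in mind is
below. This is an untested sketch, for discussion only: it open-codes
the MAC swap, omits the stats handling, reuses ifindex_out from the
current sample, and assumes the bpf_dynptr_from_xdp() /
bpf_dynptr_slice_rdwr() kfuncs are usable from an xdp.frags program,
which is exactly the part I still need to verify.

extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
			       struct bpf_dynptr *ptr__uninit) __ksym;
extern void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr *ptr, __u32 offset,
				   void *buffer, __u32 buffer__szk) __ksym;

SEC("xdp.frags")
int xdp_redirect_dynptr_prog(struct xdp_md *ctx)
{
	__u8 buf[sizeof(struct ethhdr)];
	__u8 tmp[6]; /* ETH_ALEN */
	struct bpf_dynptr ptr;
	struct ethhdr *eth;

	if (bpf_dynptr_from_xdp(ctx, 0, &ptr))
		return XDP_DROP;

	/* Direct pointer into the packet if the eth header is contiguous,
	 * otherwise the header is copied into buf.
	 */
	eth = bpf_dynptr_slice_rdwr(&ptr, 0, buf, sizeof(buf));
	if (!eth)
		return XDP_DROP;

	__builtin_memcpy(tmp, eth->h_source, sizeof(tmp));
	__builtin_memcpy(eth->h_source, eth->h_dest, sizeof(tmp));
	__builtin_memcpy(eth->h_dest, tmp, sizeof(tmp));

	/* If we operated on a copy, commit it back to the packet. */
	if ((void *)eth == (void *)buf &&
	    bpf_dynptr_write(&ptr, 0, buf, sizeof(buf), 0))
		return XDP_DROP;

	return bpf_redirect(ifindex_out, 0);
}

The point is that the copy and the explicit write-back only happen when
the requested bytes actually cross into a fragment, so the common
linear case keeps a zero-copy fast path.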
>>>> Signed-off-by: Tariq Toukan <tariqt@...dia.com>
>>>> Reviewed-by: Nimrod Oren <noren@...dia.com>
>>>> ---
>>>> samples/bpf/xdp_redirect.bpf.c | 16 ++++++++++++----
>>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/samples/bpf/xdp_redirect.bpf.c
>>>> b/samples/bpf/xdp_redirect.bpf.c
>>>> index 7c02bacfe96b..620163eb7e19 100644
>>>> --- a/samples/bpf/xdp_redirect.bpf.c
>>>> +++ b/samples/bpf/xdp_redirect.bpf.c
>>>> @@ -16,16 +16,21 @@
>>>> const volatile int ifindex_out;
>>>> -SEC("xdp")
>>>> +#define XDPBUFSIZE 64
>>>
>>> Pktgen sample scripts send 60-byte packets by default, because the
>>> 4-byte FCS (frame check sequence) is added by the hardware.
>>>
>>> Will this result in an error when bpf_xdp_load_bytes() tries to copy 64
>>> bytes from a 60-byte packet?
>>>
>>
>> Yes.
>>
>> This can be resolved by reducing XDPBUFSIZE to 60.
>> We need to check whether it's OK to disregard the last 4 bytes without
>> hurting the XDP program logic.
>>
>> If so, do you suggest changing xdp1 and xdp2 as well?
>>
>
> I can take care of reducing XDPBUFSIZE to 60 in xdp1 and xdp2, as I
> already had to make these changes for the quick bench work above ;-)
> I'll send out patches shortly.
>
>
Thanks.
Are we fine with the above?
Should we just change the array size to 60 and re-spin?
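Alternatively, instead of shrinking the buffer to 60, we could clamp
the copy length to the actual packet length via bpf_xdp_get_buff_len(),
so that short packets don't fail the load. Untested sketch (I haven't
checked that the verifier accepts the variable length here):

	__u8 pkt[XDPBUFSIZE] = {};
	void *data = pkt;
	void *data_end;
	__u32 len;

	/* Clamp to the buffer size; drop runts that can't hold an
	 * Ethernet header.
	 */
	len = bpf_xdp_get_buff_len(ctx);
	if (len > sizeof(pkt))
		len = sizeof(pkt);
	if (len < sizeof(struct ethhdr))
		return XDP_DROP;
	if (bpf_xdp_load_bytes(ctx, 0, pkt, len))
		return XDP_DROP;
	data_end = pkt + len;

with the same clamped len used for bpf_xdp_store_bytes() before the
redirect.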
>>>> +SEC("xdp.frags")
>>>> int xdp_redirect_prog(struct xdp_md *ctx)
>>>> {
>>>> - void *data_end = (void *)(long)ctx->data_end;
>>>> - void *data = (void *)(long)ctx->data;
>>>> + __u8 pkt[XDPBUFSIZE] = {};
>>>> + void *data_end = &pkt[XDPBUFSIZE-1];
>>>> + void *data = pkt;
>>>> u32 key = bpf_get_smp_processor_id();
>>>> struct ethhdr *eth = data;
>>>> struct datarec *rec;
>>>> u64 nh_off;
>>>> + if (bpf_xdp_load_bytes(ctx, 0, pkt, sizeof(pkt)))
>>>> + return XDP_DROP;
>>>
>>> E.g. sizeof(pkt) = 64 bytes here.
>>>
>>>> +
>>>> nh_off = sizeof(*eth);
>>>> if (data + nh_off > data_end)
>>>> return XDP_DROP;
>>>> @@ -36,11 +41,14 @@ int xdp_redirect_prog(struct xdp_md *ctx)
>>>> NO_TEAR_INC(rec->processed);
>>>> swap_src_dst_mac(data);
>>>> + if (bpf_xdp_store_bytes(ctx, 0, pkt, sizeof(pkt)))
>>>> + return XDP_DROP;
>>>> +
>>>> return bpf_redirect(ifindex_out, 0);
>>>> }
>>>> /* Redirect require an XDP bpf_prog loaded on the TX device */
>>>> -SEC("xdp")
>>>> +SEC("xdp.frags")
>>>> int xdp_redirect_dummy_prog(struct xdp_md *ctx)
>>>> {
>>>> return XDP_PASS;
>>>
>>
>