Message-ID: <cf65cc80-f16a-5b76-5577-57c55e952a52@mellanox.com>
Date: Fri, 8 May 2020 16:00:55 +0300
From: Maxim Mikityanskiy <maximmi@...lanox.com>
To: Björn Töpel <bjorn.topel@...el.com>,
Björn Töpel <bjorn.topel@...il.com>
Cc: ast@...nel.org, daniel@...earbox.net, davem@...emloft.net,
kuba@...nel.org, hawk@...nel.org, john.fastabend@...il.com,
netdev@...r.kernel.org, bpf@...r.kernel.org,
magnus.karlsson@...el.com, jonathan.lemon@...il.com,
jeffrey.t.kirsher@...el.com, maciej.fijalkowski@...el.com
Subject: Re: [PATCH bpf-next 10/14] mlx5, xsk: migrate to new
MEM_TYPE_XSK_BUFF_POOL
On 2020-05-08 15:27, Björn Töpel wrote:
> On 2020-05-08 13:55, Maxim Mikityanskiy wrote:
>> On 2020-05-07 13:42, Björn Töpel wrote:
>>> From: Björn Töpel <bjorn.topel@...el.com>
>>>
>>> Use the new MEM_TYPE_XSK_BUFF_POOL API in lieu of MEM_TYPE_ZERO_COPY in
>>> mlx5e. It allows dropping a lot of code from the driver (code that is
>>> now common in the AF_XDP core and was related to XSK RX frame
>>> allocation, DMA mapping, etc.) and slightly improves performance.
>>>
>>> rfc->v1: Put back the sanity check for XSK params, use XSK API to get
>>> the total headroom size. (Maxim)
>>>
>>> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
>>> Signed-off-by: Maxim Mikityanskiy <maximmi@...lanox.com>
>>
>> I did some functional and performance tests.
>>
>> Unfortunately, something is wrong with the traffic: I get zeros in
>> XDP_TX, XDP_PASS and XSK instead of packet data. I set DEBUG_HEXDUMP
>> in xdpsock, and it shows packets of the correct length, but all
>> bytes are 0 after these patches. It might be caused by wrong xdp_buff
>> pointers; however, I still have to investigate. Björn, does it also
>> affect Intel drivers, or is it Mellanox-specific?
>>
>
> Are you getting zeros for TX, PASS *and* in xdpsock (REDIRECT:ed
> packets), or just TX and PASS?
In all modes: XDP_TX, XDP_PASS and XDP_REDIRECT to XSK (xdpsock).
> No, I get correct packet data for AF_XDP zero-copy XDP_REDIRECT,
> XDP_PASS, and XDP_TX for Intel.
Hmm, weird - with the new API I expected the same behavior on all
drivers. Thanks for the information; now I know I need to look in the
mlx5 code to find the issue.
>> For performance, I got +1.0..+1.2 Mpps on RX. TX performance got
>> better after Björn inlined the relevant UMEM functions; however, there
>> is still a slight decrease compared to the old code. I'll try to find
>> the possible reason, but the good thing is that it's not significant
>> anymore.
>>
>
> Ok, so for Rx mlx5 it's the same as for i40e. Good! :-)
>
> How much decrease on Tx?
The decrease is ~0.8 Mpps (it was 3.1 Mpps before you inlined the functions).
>
> Björn