Message-ID: <cc24e860-7d6f-7ec8-49cb-a49cb066f618@kernel.org>
Date: Thu, 3 Aug 2023 14:55:37 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Wei Fang <wei.fang@....com>, Jesper Dangaard Brouer <jbrouer@...hat.com>,
Jakub Kicinski <kuba@...nel.org>
Cc: "brouer@...hat.com" <brouer@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"pabeni@...hat.com" <pabeni@...hat.com>, Shenwei Wang
<shenwei.wang@....com>, Clark Wang <xiaoning.wang@....com>,
"ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"john.fastabend@...il.com" <john.fastabend@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
dl-linux-imx <linux-imx@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>, Andrew Lunn <andrew@...n.ch>
Subject: Re: [PATCH V3 net-next] net: fec: add XDP_TX feature support
On 03/08/2023 13.18, Wei Fang wrote:
>> On 03/08/2023 05.58, Wei Fang wrote:
>>>>> } else {
>>>>> - xdp_return_frame(xdpf);
>>>>> + xdp_return_frame_rx_napi(xdpf);
>>>> If you implement Jesper's syncing suggestions, I think you can use
>>>>
>>>> page_pool_put_page(pool, page, 0, true);
>> To Jakub: using 0 here you are trying to bypass the DMA-sync (which is valid,
>> as the driver knows XDP_TX has already done the sync).
>> The code will still call into the DMA-sync calls with a size of zero, so I wonder
>> whether we should detect size zero and skip that call?
>> (I.e. is this something page_pool should support?)
>>
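A minimal sketch of what I have in mind (untested, written from memory of
net/core/page_pool.c, so treat it as an illustration only): let the
dma_sync_size==0 case bail out before touching the DMA API at all:

 static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					   struct page *page,
 					   unsigned int dma_sync_size)
 {
 	dma_addr_t dma_addr = page_pool_get_dma_addr(page);

+	/* Caller (e.g. an XDP_TX completion path) already synced this area */
+	if (!dma_sync_size)
+		return;
+
 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
 	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
 					 pool->p.offset, dma_sync_size,
 					 pool->p.dma_dir);
 }
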
[...]
>>
>>
>>>> for XDP_TX here to avoid the DMA sync on page recycle.
>>> I tried Jesper's syncing suggestion and used page_pool_put_page() to
>>> recycle pages, but the results do not seem to show an improvement in
>>> the performance of XDP_TX,
>> The optimization will only have an effect on devices that have
>> dev->dma_coherent=false; otherwise the DMA function [1] (e.g.
>> dma_direct_sync_single_for_device) will skip the sync calls.
>>
>> [1] https://elixir.bootlin.com/linux/v6.5-rc4/source/kernel/dma/direct.h#L63
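For context, that helper looks roughly like this (quoted from memory, so it
may not match v6.5-rc4 line for line); note that the arch sync only happens
for non-coherent devices:

 static inline void dma_direct_sync_single_for_device(struct device *dev,
 		dma_addr_t addr, size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);

 	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);

 	/* dma_coherent=true devices skip the (costly) arch sync */
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_device(paddr, size, dir);
 }
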
>>
>> (Cc. Andrew Lunn)
>> Do any of the imx generations have dma-noncoherent memory?
>>
>> And do any of these use the fec NIC driver?
>>
>>> it even degrades the speed.
>>
>> Could the lower numbers simply be variation between your test runs?
>>
> Maybe, I only tested once before. So I tested several times again; the
> results of the two methods do not seem to be much different so far,
> both about 255000 pkt/s.
>
>> The specific device (imx8mpevk) this was tested on clearly has
>> dma_coherent=true, or else we would have seen a difference.
>> But the code change should not have any overhead for the
>> dma_coherent=true case; the only extra overhead is the empty DMA
>> sync call with size zero (as discussed at the top).
>>
> The FEC of i.MX8MP-EVK has dma_coherent=false, and as I mentioned
> above, I did not see an obvious difference in the performance. :(
That is surprising, given the results.
(See below; the lack of a perf difference might be caused by Ethernet flow-control.)
>
>>> The result of the current modification.
>>> root@...8mpevk:~# ./xdp2 eth0
>>> proto 17: 260180 pkt/s
>>
>> These results are *significantly* better than reported in patch-1.
>> What happened?!?
>>
> The test environment is slightly different: in patch-1, the FEC port was
> directly connected to the port of another board. But in the latest test,
> the ports of the two boards were connected to a switch, so the two
> boards are no longer directly connected.
>
Hmm, I've seen this kind of perf difference between direct-connected and
via-switch setups before. The mistake I made back then was that I had not
disabled Ethernet flow-control. The xdp2 XDP_TX program swaps the MAC
addresses and sends the packet back to the packet generator (running
pktgen), which then gets overloaded itself and starts sending Ethernet
flow-control pause frames.
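For reference, the MAC swap in the xdp2 sample (samples/bpf/xdp2_kern.c)
does roughly the following (reproduced from memory, so it may differ
slightly from the in-tree sample); the program then returns XDP_TX so the
frame is bounced straight back at the generator:

 static void swap_src_dst_mac(void *data)
 {
 	unsigned short *p = data;
 	unsigned short dst[3];

 	/* Save the original destination MAC (3 x 16 bits) ... */
 	dst[0] = p[0];
 	dst[1] = p[1];
 	dst[2] = p[2];
 	/* ... copy the source MAC into the destination slot ... */
 	p[0] = p[3];
 	p[1] = p[4];
 	p[2] = p[5];
 	/* ... and write the saved destination MAC as the new source. */
 	p[3] = dst[0];
 	p[4] = dst[1];
 	p[5] = dst[2];
 }
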
Command line to disable:
# ethtool -A eth0 rx off tx off
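You can double check that the setting took effect with:
# ethtool -a eth0
(both the RX and TX pause parameters should report "off").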
Can I ask you to make sure that Ethernet flow-control is disabled
(on both the generator and the DUT, to be on the safe side) and run the test again?
--Jesper
>> e.g.
>> root@...8mpevk:~# ./xdp2 eth0
>> proto 17: 135817 pkt/s
>> proto 17: 142776 pkt/s
>>
>>> proto 17: 260373 pkt/s
>>> proto 17: 260363 pkt/s
>>> proto 17: 259036 pkt/s
[...]
>>>
>>> After using the sync suggestion, the result shows as follow.
>>> root@...8mpevk:~# ./xdp2 eth0
>>> proto 17: 255956 pkt/s
>>> proto 17: 255841 pkt/s
>>> proto 17: 255835 pkt/s