Message-ID: <ZqfpGVhBe3zt0x-K@lore-desk>
Date: Mon, 29 Jul 2024 21:10:17 +0200
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Elad Yifee <eladwf@...il.com>
Cc: Felix Fietkau <nbd@....name>, Sean Wang <sean.wang@...iatek.com>,
Mark Lee <Mark-MC.Lee@...iatek.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-mediatek@...ts.infradead.org,
Daniel Golle <daniel@...rotopia.org>,
Joe Damato <jdamato@...tly.com>
Subject: Re: [PATCH net-next v2 0/2] net: ethernet: mtk_eth_soc: improve RX
performance
> This small series includes two short and simple patches to improve RX performance
> on this driver.
Hi Elad,

What chip revision are you running?

If you are using a device that does not support HW-LRO (e.g. MT7986 or
MT7988), I guess we can try to use the page_pool_dev_alloc_frag() APIs and
request a 2048B buffer. Doing so, we can use a single page for two
rx buffers, improving recycling with page_pool. What do you think?
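Something along these lines (completely untested sketch; the helper name
mtk_page_pool_get_buff_frag and the fixed 2048B size are just placeholders
for discussion, and the pool would need PP_FLAG_PAGE_FRAG set at creation):

```c
/* Untested sketch: carve two 2048B rx buffers out of a single page
 * with the page_pool frag API, instead of one full page per buffer.
 * Assumes the page_pool was created with PP_FLAG_PAGE_FRAG.
 */
static void *mtk_page_pool_get_buff_frag(struct page_pool *pp,
					 dma_addr_t *dma_addr)
{
	unsigned int offset;
	struct page *page;

	page = page_pool_dev_alloc_frag(pp, &offset, 2048);
	if (!page)
		return NULL;

	*dma_addr = page_pool_get_dma_addr(page) + offset;
	return page_address(page) + offset;
}
```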
Regards,
Lorenzo
>
> iperf3 result without these patches:
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-1.00 sec 563 MBytes 4.72 Gbits/sec
> [ 4] 1.00-2.00 sec 563 MBytes 4.73 Gbits/sec
> [ 4] 2.00-3.00 sec 552 MBytes 4.63 Gbits/sec
> [ 4] 3.00-4.00 sec 561 MBytes 4.70 Gbits/sec
> [ 4] 4.00-5.00 sec 562 MBytes 4.71 Gbits/sec
> [ 4] 5.00-6.00 sec 565 MBytes 4.74 Gbits/sec
> [ 4] 6.00-7.00 sec 563 MBytes 4.72 Gbits/sec
> [ 4] 7.00-8.00 sec 565 MBytes 4.74 Gbits/sec
> [ 4] 8.00-9.00 sec 562 MBytes 4.71 Gbits/sec
> [ 4] 9.00-10.00 sec 558 MBytes 4.68 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-10.00 sec 5.48 GBytes 4.71 Gbits/sec sender
> [ 4] 0.00-10.00 sec 5.48 GBytes 4.71 Gbits/sec receiver
>
> iperf3 result with "use prefetch methods" patch:
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-1.00 sec 598 MBytes 5.02 Gbits/sec
> [ 4] 1.00-2.00 sec 588 MBytes 4.94 Gbits/sec
> [ 4] 2.00-3.00 sec 592 MBytes 4.97 Gbits/sec
> [ 4] 3.00-4.00 sec 594 MBytes 4.98 Gbits/sec
> [ 4] 4.00-5.00 sec 590 MBytes 4.95 Gbits/sec
> [ 4] 5.00-6.00 sec 594 MBytes 4.98 Gbits/sec
> [ 4] 6.00-7.00 sec 594 MBytes 4.98 Gbits/sec
> [ 4] 7.00-8.00 sec 593 MBytes 4.98 Gbits/sec
> [ 4] 8.00-9.00 sec 593 MBytes 4.98 Gbits/sec
> [ 4] 9.00-10.00 sec 594 MBytes 4.98 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-10.00 sec 5.79 GBytes 4.98 Gbits/sec sender
> [ 4] 0.00-10.00 sec 5.79 GBytes 4.98 Gbits/sec receiver
>
> iperf3 result with "use PP exclusively for XDP programs" patch:
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-1.00 sec 635 MBytes 5.33 Gbits/sec
> [ 4] 1.00-2.00 sec 636 MBytes 5.33 Gbits/sec
> [ 4] 2.00-3.00 sec 637 MBytes 5.34 Gbits/sec
> [ 4] 3.00-4.00 sec 636 MBytes 5.34 Gbits/sec
> [ 4] 4.00-5.00 sec 637 MBytes 5.34 Gbits/sec
> [ 4] 5.00-6.00 sec 637 MBytes 5.35 Gbits/sec
> [ 4] 6.00-7.00 sec 637 MBytes 5.34 Gbits/sec
> [ 4] 7.00-8.00 sec 636 MBytes 5.33 Gbits/sec
> [ 4] 8.00-9.00 sec 634 MBytes 5.32 Gbits/sec
> [ 4] 9.00-10.00 sec 637 MBytes 5.34 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-10.00 sec 6.21 GBytes 5.34 Gbits/sec sender
> [ 4] 0.00-10.00 sec 6.21 GBytes 5.34 Gbits/sec receiver
>
> iperf3 result with both patches:
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-1.00 sec 652 MBytes 5.47 Gbits/sec
> [ 4] 1.00-2.00 sec 653 MBytes 5.47 Gbits/sec
> [ 4] 2.00-3.00 sec 654 MBytes 5.48 Gbits/sec
> [ 4] 3.00-4.00 sec 654 MBytes 5.49 Gbits/sec
> [ 4] 4.00-5.00 sec 653 MBytes 5.48 Gbits/sec
> [ 4] 5.00-6.00 sec 653 MBytes 5.48 Gbits/sec
> [ 4] 6.00-7.00 sec 653 MBytes 5.48 Gbits/sec
> [ 4] 7.00-8.00 sec 653 MBytes 5.48 Gbits/sec
> [ 4] 8.00-9.00 sec 653 MBytes 5.48 Gbits/sec
> [ 4] 9.00-10.00 sec 654 MBytes 5.48 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.00-10.00 sec 6.38 GBytes 5.48 Gbits/sec sender
> [ 4] 0.00-10.00 sec 6.38 GBytes 5.48 Gbits/sec receiver
>
> About 16% more packets/sec without XDP program loaded,
> and about 5% more packets/sec when using PP.
> Tested on Banana Pi BPI-R4 (MT7988A)
>
> ---
> Technically, this is version 2 of the "use prefetch methods" patch.
> Initially, I submitted it as a single patch for review (RFC),
> but later I decided to include a second patch, resulting in this series.
> Changes in v2:
> - Add "use PP exclusively for XDP programs" patch and create this series
> ---
> Elad Yifee (2):
> net: ethernet: mtk_eth_soc: use prefetch methods
> net: ethernet: mtk_eth_soc: use PP exclusively for XDP programs
>
> drivers/net/ethernet/mediatek/mtk_eth_soc.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> --
> 2.45.2
>