Message-ID: <20240729183038.1959-1-eladwf@gmail.com>
Date: Mon, 29 Jul 2024 21:29:53 +0300
From: Elad Yifee <eladwf@...il.com>
To: Felix Fietkau <nbd@....name>,
	Sean Wang <sean.wang@...iatek.com>,
	Mark Lee <Mark-MC.Lee@...iatek.com>,
	Lorenzo Bianconi <lorenzo@...nel.org>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>,
	Matthias Brugger <matthias.bgg@...il.com>,
	AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
	netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org,
	linux-mediatek@...ts.infradead.org
Cc: Elad Yifee <eladwf@...il.com>,
	Daniel Golle <daniel@...rotopia.org>,
	Joe Damato <jdamato@...tly.com>
Subject: [PATCH net-next v2 0/2] net: ethernet: mtk_eth_soc: improve RX performance

This small series contains two short, simple patches that improve RX performance
in this driver.

iperf3 result without these patches:
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-1.00   sec   563 MBytes  4.72 Gbits/sec
	[  4]   1.00-2.00   sec   563 MBytes  4.73 Gbits/sec
	[  4]   2.00-3.00   sec   552 MBytes  4.63 Gbits/sec
	[  4]   3.00-4.00   sec   561 MBytes  4.70 Gbits/sec
	[  4]   4.00-5.00   sec   562 MBytes  4.71 Gbits/sec
	[  4]   5.00-6.00   sec   565 MBytes  4.74 Gbits/sec
	[  4]   6.00-7.00   sec   563 MBytes  4.72 Gbits/sec
	[  4]   7.00-8.00   sec   565 MBytes  4.74 Gbits/sec
	[  4]   8.00-9.00   sec   562 MBytes  4.71 Gbits/sec
	[  4]   9.00-10.00  sec   558 MBytes  4.68 Gbits/sec
	- - - - - - - - - - - - - - - - - - - - - - - - -
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec                  sender
	[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec                  receiver

iperf3 result with "use prefetch methods" patch:
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-1.00   sec   598 MBytes  5.02 Gbits/sec
	[  4]   1.00-2.00   sec   588 MBytes  4.94 Gbits/sec
	[  4]   2.00-3.00   sec   592 MBytes  4.97 Gbits/sec
	[  4]   3.00-4.00   sec   594 MBytes  4.98 Gbits/sec
	[  4]   4.00-5.00   sec   590 MBytes  4.95 Gbits/sec
	[  4]   5.00-6.00   sec   594 MBytes  4.98 Gbits/sec
	[  4]   6.00-7.00   sec   594 MBytes  4.98 Gbits/sec
	[  4]   7.00-8.00   sec   593 MBytes  4.98 Gbits/sec
	[  4]   8.00-9.00   sec   593 MBytes  4.98 Gbits/sec
	[  4]   9.00-10.00  sec   594 MBytes  4.98 Gbits/sec
	- - - - - - - - - - - - - - - - - - - - - - - - -
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-10.00  sec  5.79 GBytes  4.98 Gbits/sec                  sender
	[  4]   0.00-10.00  sec  5.79 GBytes  4.98 Gbits/sec                  receiver

iperf3 result with "use PP exclusively for XDP programs" patch:
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-1.00   sec   635 MBytes  5.33 Gbits/sec
	[  4]   1.00-2.00   sec   636 MBytes  5.33 Gbits/sec
	[  4]   2.00-3.00   sec   637 MBytes  5.34 Gbits/sec
	[  4]   3.00-4.00   sec   636 MBytes  5.34 Gbits/sec
	[  4]   4.00-5.00   sec   637 MBytes  5.34 Gbits/sec
	[  4]   5.00-6.00   sec   637 MBytes  5.35 Gbits/sec
	[  4]   6.00-7.00   sec   637 MBytes  5.34 Gbits/sec
	[  4]   7.00-8.00   sec   636 MBytes  5.33 Gbits/sec
	[  4]   8.00-9.00   sec   634 MBytes  5.32 Gbits/sec
	[  4]   9.00-10.00  sec   637 MBytes  5.34 Gbits/sec
	- - - - - - - - - - - - - - - - - - - - - - - - -
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-10.00  sec  6.21 GBytes  5.34 Gbits/sec                  sender
	[  4]   0.00-10.00  sec  6.21 GBytes  5.34 Gbits/sec                  receiver

iperf3 result with both patches:
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-1.00   sec   652 MBytes  5.47 Gbits/sec
	[  4]   1.00-2.00   sec   653 MBytes  5.47 Gbits/sec
	[  4]   2.00-3.00   sec   654 MBytes  5.48 Gbits/sec
	[  4]   3.00-4.00   sec   654 MBytes  5.49 Gbits/sec
	[  4]   4.00-5.00   sec   653 MBytes  5.48 Gbits/sec
	[  4]   5.00-6.00   sec   653 MBytes  5.48 Gbits/sec
	[  4]   6.00-7.00   sec   653 MBytes  5.48 Gbits/sec
	[  4]   7.00-8.00   sec   653 MBytes  5.48 Gbits/sec
	[  4]   8.00-9.00   sec   653 MBytes  5.48 Gbits/sec
	[  4]   9.00-10.00  sec   654 MBytes  5.48 Gbits/sec
	- - - - - - - - - - - - - - - - - - - - - - - - -
	[ ID] Interval           Transfer     Bandwidth
	[  4]   0.00-10.00  sec  6.38 GBytes  5.48 Gbits/sec                  sender
	[  4]   0.00-10.00  sec  6.38 GBytes  5.48 Gbits/sec                  receiver

The result is about 16% more packets/sec without an XDP program loaded,
and about 5% more packets/sec when using PP.
Tested on a Banana Pi BPI-R4 (MT7988A).
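
For context, here is a minimal, illustrative sketch of the kind of change
the "use prefetch methods" patch refers to: warming the cache lines for the
next RX descriptor and for the packet headers inside the NAPI poll loop.
The structure and field names below are made up for illustration and are
not taken from mtk_eth_soc; only prefetch()/net_prefetch() are real kernel
helpers.

/* Illustrative sketch only -- not the actual mtk_eth_soc diff. */
#include <linux/prefetch.h>
#include <linux/netdevice.h>

struct my_rx_desc {
	u32 status;			/* hypothetical DMA status word */
	u32 addr;
};

struct my_rx_ring {
	struct my_rx_desc *desc;	/* descriptor ring */
	void **buf;			/* per-descriptor data buffers */
	unsigned int next;		/* next index to process */
	unsigned int size;		/* ring size */
};

static int my_rx_poll(struct my_rx_ring *ring, int budget)
{
	int done = 0;

	while (done < budget) {
		unsigned int idx = ring->next;
		struct my_rx_desc *rxd = &ring->desc[idx];
		void *data = ring->buf[idx];

		if (!(rxd->status & BIT(31)))	/* hypothetical "DMA done" bit */
			break;

		/* Warm the next descriptor before it is needed ... */
		prefetch(&ring->desc[(idx + 1) % ring->size]);
		/* ... and the packet headers before the stack reads them. */
		net_prefetch(data);

		/* build and deliver the skb here (omitted) */

		ring->next = (idx + 1) % ring->size;
		done++;
	}

	return done;
}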

---
Technically, this is version 2 of the "use prefetch methods" patch.
Initially, I submitted it as a single patch for review (RFC),
but later I decided to include a second patch, resulting in this series.
Changes in v2:
	- Add "use PP exclusively for XDP programs" patch and create this series
---
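
A similarly hedged sketch of the idea behind "use PP exclusively for XDP
programs": take RX buffers from the page_pool only while an XDP program is
attached, and fall back to the ordinary frag allocator otherwise, so the
non-XDP path avoids the page_pool overhead. Again, the structures and field
names are assumptions for illustration, not the driver's own.

/* Illustrative sketch only -- not the actual driver code. */
#include <linux/bpf.h>
#include <linux/mm.h>
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>

struct my_eth {
	struct bpf_prog __rcu *xdp_prog;	/* attached XDP program, if any */
	struct page_pool *pp;			/* page pool used for the XDP path */
	unsigned int buf_size;			/* RX buffer/frag size */
};

static void *my_rx_alloc_buf(struct my_eth *eth, dma_addr_t *dma_addr)
{
	/* Only pay the page_pool cost when XDP actually needs it. */
	if (rcu_access_pointer(eth->xdp_prog)) {
		struct page *page = page_pool_dev_alloc_pages(eth->pp);

		if (!page)
			return NULL;
		*dma_addr = page_pool_get_dma_addr(page);
		return page_address(page);
	}

	/* Non-XDP path: plain frag allocation; DMA mapping is left to the caller. */
	*dma_addr = 0;
	return napi_alloc_frag(eth->buf_size);
}
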
Elad Yifee (2):
  net: ethernet: mtk_eth_soc: use prefetch methods
  net: ethernet: mtk_eth_soc: use PP exclusively for XDP programs

 drivers/net/ethernet/mediatek/mtk_eth_soc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

-- 
2.45.2

