Message-Id: <1739763746-28637-1-git-send-email-shradhagupta@linux.microsoft.com>
Date: Sun, 16 Feb 2025 19:42:26 -0800
From: Shradha Gupta <shradhagupta@...ux.microsoft.com>
To: linux-hyperv@...r.kernel.org,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Shradha Gupta <shradhagupta@...ux.microsoft.com>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Wei Liu <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Long Li <longli@...rosoft.com>,
Konstantin Taranov <kotaranov@...rosoft.com>,
Souradeep Chakrabarti <schakrabarti@...ux.microsoft.com>,
Erick Archer <erick.archer@...look.com>,
Shradha Gupta <shradhagupta@...rosoft.com>
Subject: [PATCH v3 net-next 1/2] net: mana: Allow tso_max_size to go up-to GSO_MAX_SIZE
Allow the max aggregated pkt size to go up to GSO_MAX_SIZE for the MANA NIC.
This patch only increases the max allowable gso/gro pkt size for MANA
devices and does not change the defaults.
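As an illustration (not part of this patch): once the driver advertises the
larger limit, the per-device aggregation sizes could be raised at runtime
with iproute2. The device name and sizes below are placeholders, not values
taken from this series:
ip link set dev eth0 gso_max_size 81920 gro_max_size 81920
# newer iproute2/kernels also expose IPv4-specific knobs:
ip link set dev eth0 gso_ipv4_max_size 81920 gro_ipv4_max_size 81920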
Following are the perf benefits of increasing the aggregated pkt size from
the legacy gso_max_size value (64K) to a larger one (up to 511K):
IPv4 tests
for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
-O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
min  p90  p99  Throughput  gso_max_size
 93  171  194     6594.25
 97  154  180     7183.74
 95  165  189     6927.86
 96  165  188     6976.04
 93  154  185     7338.05      64K
 93  168  189     6938.03
 94  169  189     6784.93
 92  166  189     7117.56
 94  179  191     6678.44
 95  157  183     7277.81

min  p90  p99  Throughput  gso_max_size
 93  134  146     8448.75
 95  134  140     8396.54
 94  137  148     8204.12
 94  137  148     8244.41
 94  128  139     8666.52      80K
 94  141  153     8116.86
 94  138  149     8163.92
 92  135  142     8362.72
 92  134  142     8497.57
 93  136  148     8393.23
IPv6 Tests
for i in {1..10}; do netperf -t TCP_RR -H fd00:9013:cadd::4 -p50000 --
-r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
min  p90  p99  Throughput  gso_max_size
108  165  170      6673.2
101  169  189     6451.69
101  165  169     6737.65
102  167  175     6614.64
101  178  189     6247.13      64K
107  163  169     6678.63
106  176  187     6350.86
100  164  169     6617.36
102  163  170     6849.21
102  168  175      6605.7

min  p90  p99  Throughput  gso_max_size
108  155  166        7183
110  154  163     7268.87
109  152  159     7434.35
107  145  157     7569.15
107  149  164     7496.17      80K
110  154  159     7245.85
108  156  162     7266.24
109  145  158     7526.66
106  145  151     7785.75
111  148  157     7246.65
Tested on Azure env with Accelerated Networking enabled and disabled.
Signed-off-by: Shradha Gupta <shradhagupta@...ux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@...rosoft.com>
---
Changes in v3
* Add ipv6_hopopt_jumbo_remove() while sending Big TCP packets
---
Changes in v2
* Instead of using 'tcp segment' throughout the patch used more accurate
term 'aggregated pkt size'
---
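(Reviewer note, not part of the commit message: after applying the series,
the ceiling advertised by the device can be inspected with iproute2. The
device name below is a placeholder and the exact fields printed depend on
the iproute2 version.)
ip -d link show dev eth0   # detailed output includes gso_max_size/tso_max_size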
drivers/net/ethernet/microsoft/mana/mana_en.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index aa1e47233fe5..3b0fb4d95cf7 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -256,6 +256,9 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	if (skb_cow_head(skb, MANA_HEADROOM))
 		goto tx_drop_count;
 
+	if (unlikely(ipv6_hopopt_jumbo_remove(skb)))
+		goto tx_drop_count;
+
 	txq = &apc->tx_qp[txq_idx].txq;
 	gdma_sq = txq->gdma_sq;
 	cq = &apc->tx_qp[txq_idx].tx_cq;
@@ -2873,6 +2876,8 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	ndev->dev_port = port_idx;
 	SET_NETDEV_DEV(ndev, gc->dev);
 
+	netif_set_tso_max_size(ndev, GSO_MAX_SIZE);
+
 	netif_carrier_off(ndev);
 
 	netdev_rss_key_fill(apc->hashkey, MANA_HASH_KEY_SIZE);
--
2.34.1