Message-ID:
<PAXPR04MB851071D8B6E7F1FBCBA656CA88C9A@PAXPR04MB8510.eurprd04.prod.outlook.com>
Date: Mon, 17 Nov 2025 13:54:20 +0000
From: Wei Fang <wei.fang@....com>
To: Andrew Lunn <andrew@...n.ch>
CC: "linux@...linux.org.uk" <linux@...linux.org.uk>, "hkallweit1@...il.com"
<hkallweit1@...il.com>, "davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>, "kuba@...nel.org"
<kuba@...nel.org>, "pabeni@...hat.com" <pabeni@...hat.com>, "eric@...int.com"
<eric@...int.com>, "maxime.chevallier@...tlin.com"
<maxime.chevallier@...tlin.com>, "imx@...ts.linux.dev" <imx@...ts.linux.dev>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2] net: phylink: add missing supported link modes for the
fixed-link
> With 1ms ping times, you don't have buffer bloat problems, so that is
> not the issue.
>
> I still think you need to look at this some more. Both Russell's
> comments about it potentially blocking traffic for other ports, and
> why TCP is doing so badly; maybe get some traffic dumps and ask the
> TCP experts.
>
The default TCP block size sent by iperf is 128 KB. ENETC fragments
each block via TSO and sends the segments to the switch. Because the
link between ENETC and the switch CPU port runs at 2.5 Gbps, it takes
approximately 420 us to transfer one TCP block to the switch. However,
the switch's user port runs at only 1 Gbps, so the switch needs
approximately 1050 us to send the same data out. Packets therefore
accumulate inside the switch, and without flow control enabled this
can exhaust the switch's buffer, eventually leading to congestion.
Debugging results from the switch show that many packets are dropped
on the CPU port, and the recorded drop reason is precisely congestion.
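
For reference, a back-of-the-envelope sketch of the arithmetic above
(plain C, not driver code): the 128 KB block size and the 2.5 Gbps /
1 Gbps link speeds are the figures from this mail; the assumption
that blocks arrive back-to-back is mine.

#include <stdio.h>

int main(void)
{
	const double block_bytes = 128.0 * 1024; /* iperf default TCP block */
	const double in_bps  = 2.5e9;            /* ENETC -> switch CPU port */
	const double out_bps = 1.0e9;            /* switch user port */

	double t_in  = block_bytes * 8 / in_bps;  /* time to receive one block */
	double t_out = block_bytes * 8 / out_bps; /* time to transmit it */

	/* While a block arrives at 2.5 Gbps, the 1 Gbps user port drains
	 * only part of it; the rest is buffered in the switch, so
	 * back-to-back blocks grow the queue until the buffer is full.
	 */
	double drained  = out_bps * t_in / 8;     /* bytes sent during t_in */
	double buffered = block_bytes - drained;  /* bytes queued per block */

	printf("in: %.0f us, out: %.0f us, queued per block: %.0f bytes\n",
	       t_in * 1e6, t_out * 1e6, buffered);

	return 0;
}

This prints roughly 419 us in, 1049 us out, and about 78 KB left
queued per 128 KB block, consistent with the congestion drops seen on
the CPU port.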