Message-ID:
 <PAXPR04MB8510D5CADBA4DA2F4AE0667388D6A@PAXPR04MB8510.eurprd04.prod.outlook.com>
Date: Tue, 18 Nov 2025 06:20:04 +0000
From: Wei Fang <wei.fang@....com>
To: Andrew Lunn <andrew@...n.ch>
CC: "linux@...linux.org.uk" <linux@...linux.org.uk>, "hkallweit1@...il.com"
	<hkallweit1@...il.com>, "davem@...emloft.net" <davem@...emloft.net>,
	"edumazet@...gle.com" <edumazet@...gle.com>, "kuba@...nel.org"
	<kuba@...nel.org>, "pabeni@...hat.com" <pabeni@...hat.com>, "eric@...int.com"
	<eric@...int.com>, "maxime.chevallier@...tlin.com"
	<maxime.chevallier@...tlin.com>, "imx@...ts.linux.dev" <imx@...ts.linux.dev>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2] net: phylink: add missing supported link modes for the
 fixed-link

> > The default TCP block size sent by iperf is 128KB. ENETC then fragments
> > the packet via TSO and sends the fragments to the switch. Because the
> > link speed between ENETC and the switch CPU port is 2.5Gbps, it takes
> > approximately 420 us for the TCP block to be sent to the switch. However,
> > the link speed of the switch's user port is only 1Gbps, and the switch takes
> > approximately 1050 us to send the packets out. Therefore, packets
> > accumulate within the switch. Without flow control enabled, this can
> > exhaust the switch's buffer, eventually leading to congestion.
> >
> > Debugging results from the switch show that many packets are being
> > dropped on the CPU port, and the packet loss is precisely due to
> > congestion.
> 
> BQL might help you. It could break the 128KB burst up into a number of
> smaller bursts, helping avoid the congestion.
> 

Thanks. Based on your suggestion, I added BQL to the enetc driver,
but the issue still exists. The buffer pool of the CPU port has very
few buffers, so it congests easily. I then removed the maximum
threshold limit for the buffer pool, allowing the CPU port to use up
to the entire switch buffer. With that, TCP performance meets
expectations, but there are still some TCP retransmissions.
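For context, a minimal sketch of where the BQL accounting calls
generally sit in a TX path like this (the foo_* names are illustrative
placeholders, not the actual enetc structures):

#include <linux/netdevice.h>

/* BQL needs two accounting calls per TX queue: one when bytes are
 * queued, one when they complete. */
static netdev_tx_t foo_start_xmit(struct sk_buff *skb,
				  struct net_device *ndev)
{
	struct netdev_queue *nq = netdev_get_tx_queue(ndev,
					skb_get_queue_mapping(skb));

	/* ... map the skb and post BDs to the TX ring ... */

	/* Account queued bytes so BQL can limit in-flight data. */
	netdev_tx_sent_queue(nq, skb->len);
	return NETDEV_TX_OK;
}

static void foo_clean_tx_ring(struct net_device *ndev, int index)
{
	struct netdev_queue *nq = netdev_get_tx_queue(ndev, index);
	unsigned int pkts = 0, bytes = 0;

	/* ... walk completed BDs, accumulating pkts and bytes ... */

	/* Completions let BQL shrink the queue limit toward the
	 * smallest value that still keeps the link busy. */
	netdev_tx_completed_queue(nq, pkts, bytes);
}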

> Are there any parameters you can change with TSO? This is one
> stream. Sending it out at 2.5G line rate makes no sense when you know
> it is going to hit a 1G egress. You might as well limit TSO to
> 1G. That will help single stream traffic. If you have multiple streams
> it will not help, you will still hit congestion, assuming you can do
> multiple TSOs in parallel.
> 

For ENETC, TSO is mainly performed in hardware. The driver simply puts
the entire TCP block on the TX ring and tells the hardware, via the
buffer descriptor (BD), to perform TSO. Therefore, we cannot adjust
TSO parameters to reduce the transmission rate from ENETC to the CPU
port.
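To illustrate (with made-up field and flag names; this is not the real
ENETC BD layout), the driver-side work amounts to something like the
following, which is why there is no software knob for the segmentation
rate:

#include <linux/bits.h>
#include <linux/skbuff.h>

/* Hypothetical TSO BD, for illustration only. The driver hands the
 * whole block to hardware in one descriptor; hardware cuts it into
 * MSS-sized segments at line rate. */
struct foo_tx_bd {
	__le64 addr;	/* DMA address of the full TCP block */
	__le32 len;	/* total length, e.g. up to 128KB */
	__le16 mss;	/* segment size for hardware to cut to */
	__le16 flags;	/* FOO_TXBD_TSO asks hardware to segment */
};

#define FOO_TXBD_TSO	BIT(0)

static void foo_fill_tso_bd(struct foo_tx_bd *bd, struct sk_buff *skb,
			    dma_addr_t dma)
{
	bd->addr  = cpu_to_le64(dma);
	bd->len   = cpu_to_le32(skb->len);
	bd->mss   = cpu_to_le16(skb_shinfo(skb)->gso_size);
	bd->flags = cpu_to_le16(FOO_TXBD_TSO);
}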

It is also not possible for ENETC to set a per-stream egress rate
based on the stream's egress port; the hardware does not support this.
What we can set is the ENETC port speed, which is a global
configuration. But some user ports of the NETC switch are 2.5Gbps
SGMII interfaces, so we cannot lower the ENETC port speed to 1Gbps.
Currently, enabling flow control on this internal link appears to be
the best solution, e.g. with a fixed-link description like the one
below.
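For reference, a fixed internal link with pause enabled would be
described in the device tree roughly as follows (node placement is
illustrative); the "pause" property is what should make phylink
advertise flow control on the fixed-link:

	fixed-link {
		speed = <2500>;
		full-duplex;
		pause;
	};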

