Message-ID: 
 <176279049361.2130772.7535770079337105870.stgit@ahduyck-xeon-server.home.arpa>
Date: Mon, 10 Nov 2025 08:01:33 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: netdev@...r.kernel.org
Cc: kuba@...nel.org, kernel-team@...a.com, andrew+netdev@...n.ch,
 hkallweit1@...il.com, linux@...linux.org.uk, pabeni@...hat.com,
 davem@...emloft.net
Subject: [net-next PATCH v3 05/10] net: pcs: xpcs: Add support for FBNIC 25G,
 50G, 100G PMA

From: Alexander Duyck <alexanderduyck@...com>

The fbnic driver plans to use the XPCS driver to enable PCS support and
better integration with phylink. Doing so requires enabling several
workarounds, because the PMA/PMD interface for fbnic is unique: it is a
mix of two different vendor products with a custom wrapper around the IP.

As such, I have generated a PHY identifier per IEEE 802.3-2022 clause
22.2.4.3.1 from the OUI belonging to Meta Platforms and used with our
NICs. This value is reported as the PHY ID via the SW-based MDIO
interface so that the fbnic device can be identified and the necessary
workarounds enabled in the XPCS driver.

As an initial workaround, this change adds an exception so that a soft
reset is not performed when the driver is first bound to the PCS.

In addition, I have added logic to integrate the PMA link state into the
PCS link state. This prevents the link from being reported up too soon
on the FBNIC PHY and as a result avoids link flaps.

Signed-off-by: Alexander Duyck <alexanderduyck@...com>
---
 drivers/net/pcs/pcs-xpcs.c   |   23 +++++++++++++++++++++--
 include/linux/pcs/pcs-xpcs.h |    2 ++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
index 8b5b5b63b74b..69a6c03fd9e7 100644
--- a/drivers/net/pcs/pcs-xpcs.c
+++ b/drivers/net/pcs/pcs-xpcs.c
@@ -597,7 +597,25 @@ static int xpcs_c45_read_pcs_speed(struct dw_xpcs *xpcs,
 static int xpcs_resolve_pma(struct dw_xpcs *xpcs,
 			    struct phylink_link_state *state)
 {
-	int err = 0;
+	int pma_stat1, err = 0;
+
+	/* The Meta Platforms FBNIC PMD will go into a training state for
+	 * about 4 seconds when the link first comes up. During this time the
+	 * PCS link will bounce. To avoid reporting link up too soon we include
+	 * the PMA/PMD state provided by the driver.
+	 */
+	if (xpcs->info.pma == MP_FBNIC_XPCS_PMA_100G_ID) {
+		pma_stat1 = xpcs_read(xpcs, MDIO_MMD_PMAPMD, MDIO_STAT1);
+		if (pma_stat1 < 0) {
+			state->link = false;
+			return pma_stat1;
+		}
+
+		if (!(pma_stat1 & MDIO_STAT1_LSTATUS)) {
+			state->link = false;
+			return 0;
+		}
+	}
 
 	state->pause = MLO_PAUSE_TX | MLO_PAUSE_RX;
 	state->duplex = DUPLEX_FULL;
@@ -1591,7 +1609,8 @@ static struct dw_xpcs *xpcs_create(struct mdio_device *mdiodev)
 
 	xpcs_get_interfaces(xpcs, xpcs->pcs.supported_interfaces);
 
-	if (xpcs->info.pma == WX_TXGBE_XPCS_PMA_10G_ID)
+	if (xpcs->info.pma == WX_TXGBE_XPCS_PMA_10G_ID ||
+	    xpcs->info.pma == MP_FBNIC_XPCS_PMA_100G_ID)
 		xpcs->pcs.poll = false;
 	else
 		xpcs->need_reset = true;
diff --git a/include/linux/pcs/pcs-xpcs.h b/include/linux/pcs/pcs-xpcs.h
index 4cf6bd611e5a..36073f7b6bb4 100644
--- a/include/linux/pcs/pcs-xpcs.h
+++ b/include/linux/pcs/pcs-xpcs.h
@@ -39,6 +39,8 @@ enum dw_xpcs_pma_id {
 	DW_XPCS_PMA_GEN5_10G_ID,
 	DW_XPCS_PMA_GEN5_12G_ID,
 	WX_TXGBE_XPCS_PMA_10G_ID = 0xfc806000,
+	/* Meta Platforms OUI 88:25:08, model 0, revision 0 */
+	MP_FBNIC_XPCS_PMA_100G_ID = 0x46904000,
 };
 
 struct dw_xpcs_info {
