Message-ID: <67fe41e5.5d0a0220.1003f3.9737@mx.google.com>
Date: Tue, 15 Apr 2025 13:24:16 +0200
From: Christian Marangi <ansuelsmth@...il.com>
To: "Russell King (Oracle)" <linux@...linux.org.uk>
Cc: Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Rob Herring <robh@...nel.org>,
	Krzysztof Kozlowski <krzk+dt@...nel.org>,
	Conor Dooley <conor+dt@...nel.org>,
	Heiner Kallweit <hkallweit1@...il.com>,
	Florian Fainelli <florian.fainelli@...adcom.com>,
	Broadcom internal kernel review list <bcm-kernel-feedback-list@...adcom.com>,
	Marek Behún <kabel@...nel.org>,
	Andrei Botila <andrei.botila@....nxp.com>,
	Sabrina Dubroca <sd@...asysnail.net>,
	Eric Woudstra <ericwouds@...il.com>,
	Daniel Golle <daniel@...rotopia.org>, netdev@...r.kernel.org,
	devicetree@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [net-next PATCH v7 5/6] net: phy: Add support for Aeonsemi
 AS21xxx PHYs

On Tue, Apr 15, 2025 at 10:37:49AM +0100, Russell King (Oracle) wrote:
> On Thu, Apr 10, 2025 at 11:53:35AM +0200, Christian Marangi wrote:
> > +#define VEND1_IPC_DATA0			0x5808
> > +#define VEND1_IPC_DATA1			0x5809
> > +#define VEND1_IPC_DATA2			0x580a
> > +#define VEND1_IPC_DATA3			0x580b
> > +#define VEND1_IPC_DATA4			0x580c
> > +#define VEND1_IPC_DATA5			0x580d
> > +#define VEND1_IPC_DATA6			0x580e
> > +#define VEND1_IPC_DATA7			0x580f
> 
> Do you use any of these other than the first? If not, please remove
> them, maybe adding a comment before the first stating that there are
> 8 registers.
>

No, only the first one is used directly since the VEND1_IPC_DATA() macro is
used to access the others. Also, following Andrew's request I will declare
the max number of IPC data registers, so yes, I can drop them.
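
Something like this (just a sketch, assuming the existing VEND1_IPC_DATA()
macro stays a simple base + offset):

/* The IPC exposes 8 consecutive 16-bit data registers */
#define VEND1_IPC_DATA0			0x5808
#define VEND1_IPC_DATA(x)		(VEND1_IPC_DATA0 + (x))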

> > +static int aeon_ipcs_wait_cmd(struct phy_device *phydev, bool parity_status)
> > +{
> > +	u16 val;
> > +
> > +	/* Exit condition logic:
> > +	 * - Wait for the parity bit to match
> > +	 * - Wait for status success, error OR ready
> > +	 */
> > +	return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1, VEND1_IPC_STS, val,
> > +					 FIELD_GET(AEON_IPC_STS_PARITY, val) == parity_status &&
> > +					 (val & AEON_IPC_STS_STATUS) != AEON_IPC_STS_STATUS_RCVD &&
> > +					 (val & AEON_IPC_STS_STATUS) != AEON_IPC_STS_STATUS_PROCESS &&
> > +					 (val & AEON_IPC_STS_STATUS) != AEON_IPC_STS_STATUS_BUSY,
> > +					 AEON_IPC_DELAY, AEON_IPC_TIMEOUT, false);
> 
> Hmm. I'm wondering whether:
> 
> static bool aeon_ipc_ready(u16 val, bool parity_status)
> {
> 	u16 status;
> 
> 	if (FIELD_GET(AEON_IPC_STS_PARITY, val) != parity_status)
> 		return false;
> 
> 	status = val & AEON_IPC_STS_STATUS;
> 
> 	return status != AEON_IPC_STS_STATUS_RCVD &&
> 	       status != AEON_IPC_STS_STATUS_PROCESS &&
> 	       status != AEON_IPC_STS_STATUS_BUSY;
> }
> 
> would be better, and then maybe you can fit the code into less than 80
> columns. I'm not a fan of FIELD_PREP_CONST() when it causes differing
> usage patterns like the above (FIELD_GET(AEON_IPC_STS_STATUS, val)
> would match the coding style, and probably makes no difference to the
> code emitted.)
> 

Are you suggesting I use a generic readx poll function, or a while + sleep
loop, together with the suggested _ready() function?
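
i.e. something like this (untested sketch), keeping phy_read_mmd_poll_timeout()
and only moving the exit condition into your suggested aeon_ipc_ready() helper:

static int aeon_ipcs_wait_cmd(struct phy_device *phydev, bool parity_status)
{
	u16 val;

	/* Poll IPC status until the parity bit matches and the status is
	 * no longer RCVD/PROCESS/BUSY.
	 */
	return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1, VEND1_IPC_STS,
					 val, aeon_ipc_ready(val, parity_status),
					 AEON_IPC_DELAY, AEON_IPC_TIMEOUT, false);
}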

> > +}
> > +
> > +static int aeon_ipc_send_cmd(struct phy_device *phydev,
> > +			     struct as21xxx_priv *priv,
> > +			     u16 cmd, u16 *ret_sts)
> > +{
> > +	bool curr_parity;
> > +	int ret;
> > +
> > +	/* The IPC syncs by using a single parity bit.
> > +	 * Each CMD has this bit alternately set or cleared
> > +	 * to track the correct flow and packet order.
> > +	 */
> > +	curr_parity = priv->parity_status;
> > +	if (priv->parity_status)
> > +		cmd |= AEON_IPC_CMD_PARITY;
> > +
> > +	/* Always update parity for next packet */
> > +	priv->parity_status = !priv->parity_status;
> > +
> > +	ret = phy_write_mmd(phydev, MDIO_MMD_VEND1, VEND1_IPC_CMD, cmd);
> > +	if (ret)
> > +		return ret;
> > +
> > +	/* Wait for packet to be processed */
> > +	usleep_range(AEON_IPC_DELAY, AEON_IPC_DELAY + 5000);
> > +
> > +	/* With no ret_sts, ignore waiting for packet completion
> > +	 * (ipc parity bit sync)
> > +	 */
> > +	if (!ret_sts)
> > +		return 0;
> > +
> > +	ret = aeon_ipcs_wait_cmd(phydev, curr_parity);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VEND1_IPC_STS);
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	*ret_sts = ret;
> > +	if ((*ret_sts & AEON_IPC_STS_STATUS) != AEON_IPC_STS_STATUS_SUCCESS)
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> > +static int aeon_ipc_send_msg(struct phy_device *phydev,
> > +			     u16 opcode, u16 *data, unsigned int data_len,
> > +			     u16 *ret_sts)
> > +{
> > +	struct as21xxx_priv *priv = phydev->priv;
> > +	u16 cmd;
> > +	int ret;
> > +	int i;
> > +
> > +	/* The IPC has a max of 8 registers to transfer data,
> > +	 * make sure we never exceed this.
> > +	 */
> > +	if (data_len > AEON_IPC_DATA_MAX)
> > +		return -EINVAL;
> > +
> > +	mutex_lock(&priv->ipc_lock);
> > +
> > +	for (i = 0; i < data_len / sizeof(u16); i++)
> > +		phy_write_mmd(phydev, MDIO_MMD_VEND1, VEND1_IPC_DATA(i),
> > +			      data[i]);
> > +
> > +	cmd = FIELD_PREP(AEON_IPC_CMD_SIZE, data_len) |
> > +	      FIELD_PREP(AEON_IPC_CMD_OPCODE, opcode);
> > +	ret = aeon_ipc_send_cmd(phydev, priv, cmd, ret_sts);
> > +	if (ret)
> > +		phydev_err(phydev, "failed to send ipc msg for %x: %d\n",
> > +			   opcode, ret);
> > +
> > +	mutex_unlock(&priv->ipc_lock);
> > +
> > +	return ret;
> > +}
> > +
> > +static int aeon_ipc_rcv_msg(struct phy_device *phydev,
> > +			    u16 ret_sts, u16 *data)
> > +{
> > +	struct as21xxx_priv *priv = phydev->priv;
> > +	unsigned int size;
> > +	int ret;
> > +	int i;
> > +
> > +	if ((ret_sts & AEON_IPC_STS_STATUS) == AEON_IPC_STS_STATUS_ERROR)
> > +		return -EINVAL;
> > +
> > +	/* Prevent IPC from stack smashing the kernel */
> > +	size = FIELD_GET(AEON_IPC_STS_SIZE, ret_sts);
> > +	if (size > AEON_IPC_DATA_MAX)
> > +		return -EINVAL;
> > +
> > +	mutex_lock(&priv->ipc_lock);
> > +
> > +	for (i = 0; i < DIV_ROUND_UP(size, sizeof(u16)); i++) {
> > +		ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VEND1_IPC_DATA(i));
> > +		if (ret < 0) {
> > +			size = ret;
> > +			goto out;
> > +		}
> > +
> > +		data[i] = ret;
> > +	}
> > +
> > +out:
> > +	mutex_unlock(&priv->ipc_lock);
> 
> I think the locking here is suspicious.
> 
> aeon_ipc_send_msg() takes the lock, writes the registers, and waits for
> the success signal, returning the status, and then drops the lock.
> 
> Time passes.
> 
> aeon_ipc_rcv_msg() is then called, which extracts the number of bytes to
> be read from the returned status, takes the lock, reads the registers,
> and then drops the lock.
> 
> So, what can happen is:
> 
> 	Thread 1			Thread 2
> 	aeon_ipc_send_msg()
> 					aeon_ipc_send_msg()
> 	aeon_ipc_rcv_msg()
> 					aeon_ipc_rcv_msg()
> 
> Which means thread 1 reads the reply from thread 2's message.
> 
> So, this lock does nothing to really protect against the IPC mechanism
> against races.
> 
> I think you need to combine both into a single function that takes the
> lock once and releases it once.
>

Mhhh I think I will have to create __ prefixed functions for the locked and
non-locked variants. I think I would just handle the lock in the functions
that use send and rcv, and maybe add some smatch tag to make sure the lock
is taken when entering those functions.
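
Roughly something like this (untested sketch, the __ prefixed lockless
variants and the wrapper name are just placeholders):

static int aeon_ipc_send_and_rcv(struct phy_device *phydev,
				 u16 opcode, u16 *data, unsigned int data_len,
				 u16 *ret_data)
{
	struct as21xxx_priv *priv = phydev->priv;
	u16 ret_sts;
	int ret;

	/* Hold the IPC lock across the whole send + receive sequence so
	 * another thread cannot interleave its own command in between.
	 */
	mutex_lock(&priv->ipc_lock);
	ret = __aeon_ipc_send_msg(phydev, opcode, data, data_len, &ret_sts);
	if (!ret)
		ret = __aeon_ipc_rcv_msg(phydev, ret_sts, ret_data);
	mutex_unlock(&priv->ipc_lock);

	return ret;
}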

-- 
	Ansuel
