Message-ID: <20250307180006.GK3666230@kernel.org>
Date: Fri, 7 Mar 2025 18:00:06 +0000
From: Simon Horman <horms@...nel.org>
To: Satish Kharat <satishkh@...co.com>
Cc: Christian Benvenuti <benve@...co.com>,
	Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Nelson Escobar <neescoba@...co.com>,
	John Daley <johndale@...co.com>
Subject: Re: [PATCH net-next v3 4/8] enic: enable rq extended cq support

On Thu, Mar 06, 2025 at 07:15:25PM -0500, Satish Kharat via B4 Relay wrote:
> From: Satish Kharat <satishkh@...co.com>
> 
> Enable reading all the supported RQ CQ entry sizes from hardware and
> use the highest supported size.
> 
> Co-developed-by: Nelson Escobar <neescoba@...co.com>
> Signed-off-by: Nelson Escobar <neescoba@...co.com>
> Co-developed-by: John Daley <johndale@...co.com>
> Signed-off-by: John Daley <johndale@...co.com>
> Signed-off-by: Satish Kharat <satishkh@...co.com>

...

> diff --git a/drivers/net/ethernet/cisco/enic/enic_rq.c b/drivers/net/ethernet/cisco/enic/enic_rq.c
> index 842b273c2e2a59e81a7c1423449b023d646f5e81..ccbf5c9a21d0ffe33c7c74042d5425497ea0f9dc 100644
> --- a/drivers/net/ethernet/cisco/enic/enic_rq.c
> +++ b/drivers/net/ethernet/cisco/enic/enic_rq.c
> @@ -21,24 +21,76 @@ static void enic_intr_update_pkt_size(struct vnic_rx_bytes_counter *pkt_size,
>  		pkt_size->small_pkt_bytes_cnt += pkt_len;
>  }
>  
> -static void enic_rq_cq_desc_dec(struct cq_enet_rq_desc *desc, u8 *type,
> +static void enic_rq_cq_desc_dec(void *cq_desc, u8 cq_desc_size, u8 *type,
>  				u8 *color, u16 *q_number, u16 *completed_index)
>  {
>  	/* type_color is the last field for all cq structs */
> -	u8 type_color = desc->type_color;
> +	u8 type_color;
> +
> +	switch (cq_desc_size) {
> +	case VNIC_RQ_CQ_ENTRY_SIZE_16: {
> +		struct cq_enet_rq_desc *desc =
> +			(struct cq_enet_rq_desc *)cq_desc;
> +		type_color = desc->type_color;
> +
> +		/* Make sure color bit is read from desc *before* other fields
> +		 * are read from desc.  Hardware guarantees color bit is last
> +		 * bit (byte) written.  Adding the rmb() prevents the compiler
> +		 * and/or CPU from reordering the reads which would potentially
> +		 * result in reading stale values.
> +		 */
> +		rmb();
>  
> -	/* Make sure color bit is read from desc *before* other fields
> -	 * are read from desc.  Hardware guarantees color bit is last
> -	 * bit (byte) written.  Adding the rmb() prevents the compiler
> -	 * and/or CPU from reordering the reads which would potentially
> -	 * result in reading stale values.
> -	 */
> -	rmb();
> +		*q_number = le16_to_cpu(desc->q_number_rss_type_flags) &
> +			    CQ_DESC_Q_NUM_MASK;
> +		*completed_index = le16_to_cpu(desc->completed_index_flags) &
> +				   CQ_DESC_COMP_NDX_MASK;
> +		break;
> +	}
> +	case VNIC_RQ_CQ_ENTRY_SIZE_32: {
> +		struct cq_enet_rq_desc_32 *desc =
> +			(struct cq_enet_rq_desc_32 *)cq_desc;
> +		type_color = desc->type_color;
> +
> +		/* Make sure color bit is read from desc *before* other fields
> +		 * are read from desc.  Hardware guarantees color bit is last
> +		 * bit (byte) written.  Adding the rmb() prevents the compiler
> +		 * and/or CPU from reordering the reads which would potentially
> +		 * result in reading stale values.
> +		 */
> +		rmb();
> +
> +		*q_number = le16_to_cpu(desc->q_number_rss_type_flags) &
> +			    CQ_DESC_Q_NUM_MASK;
> +		*completed_index = le16_to_cpu(desc->completed_index_flags) &
> +				   CQ_DESC_COMP_NDX_MASK;
> +		*completed_index |= (desc->fetch_index_flags & CQ_DESC_32_FI_MASK) <<
> +				CQ_DESC_COMP_NDX_BITS;
> +		break;
> +	}
> +	case VNIC_RQ_CQ_ENTRY_SIZE_64: {
> +		struct cq_enet_rq_desc_64 *desc =
> +			(struct cq_enet_rq_desc_64 *)cq_desc;
> +		type_color = desc->type_color;
> +
> +		/* Make sure color bit is read from desc *before* other fields
> +		 * are read from desc.  Hardware guarantees color bit is last
> +		 * bit (byte) written.  Adding the rmb() prevents the compiler
> +		 * and/or CPU from reordering the reads which would potentially
> +		 * result in reading stale values.
> +		 */
> +		rmb();
> +
> +		*q_number = le16_to_cpu(desc->q_number_rss_type_flags) &
> +			    CQ_DESC_Q_NUM_MASK;
> +		*completed_index = le16_to_cpu(desc->completed_index_flags) &
> +				   CQ_DESC_COMP_NDX_MASK;
> +		*completed_index |= (desc->fetch_index_flags & CQ_DESC_64_FI_MASK) <<
> +				CQ_DESC_COMP_NDX_BITS;
> +		break;
> +	}
> +	}
>  
> -	*q_number = le16_to_cpu(desc->q_number_rss_type_flags) &
> -		CQ_DESC_Q_NUM_MASK;
> -	*completed_index = le16_to_cpu(desc->completed_index_flags) &
> -	CQ_DESC_COMP_NDX_MASK;
>  	*color = (type_color >> CQ_DESC_COLOR_SHIFT) & CQ_DESC_COLOR_MASK;
>  	*type = type_color & CQ_DESC_TYPE_MASK;

Hi Satish, all,

I'm unsure if this can occur in practice, but it seems that if
none of the cases above are met then type_color will be used
uninitialised here.

Flagged by Smatch.
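One defensive option, sketched here with a simplified, hypothetical helper rather than the driver's actual structs, is to report an error from the default case so the caller never consumes a value the switch did not write:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the entry sizes; the real driver uses the
 * VNIC_RQ_CQ_ENTRY_SIZE_{16,32,64} enum values and the cq_enet_rq_desc*
 * structs. */
enum cq_entry_size { CQ_SIZE_16 = 16, CQ_SIZE_32 = 32, CQ_SIZE_64 = 64 };

/* Decode the type_color byte for a given entry size.  Returns 0 on
 * success and -1 for an unrecognised size, so *type_color is only read
 * by the caller after it has been written. */
static int decode_type_color(uint8_t size, uint8_t raw, uint8_t *type_color)
{
	switch (size) {
	case CQ_SIZE_16:
	case CQ_SIZE_32:
	case CQ_SIZE_64:
		*type_color = raw;
		return 0;
	default:
		/* Unknown size: leave *type_color untouched and signal
		 * the error instead of proceeding with garbage. */
		return -1;
	}
}
```

Whether the error is propagated or the default case simply zero-initialises `type_color` is a driver policy question, but either keeps Smatch quiet and the read well-defined.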

>  }

...
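For readers unfamiliar with the color-bit protocol the quoted comments describe: the hardware writes the descriptor payload first and flips the color bit last, so the driver must read the color bit strictly before the other fields. A minimal single-threaded sketch of that pattern, using a hypothetical 4-byte entry and a compiler-level barrier in place of the kernel's rmb() (which additionally orders reads on weakly ordered CPUs):

```c
#include <stdint.h>

/* Hypothetical completion-queue entry; only loosely mirrors the
 * cq_enet_rq_desc layout.  Hardware writes payload first and flips the
 * color bit in type_color last. */
struct cq_entry {
	uint16_t payload;
	uint8_t pad;
	uint8_t type_color;	/* bit 7 = color, written last by hw */
};

#define COLOR_SHIFT 7
#define COLOR_MASK  0x1

/* Compiler-only read barrier; stands in for the kernel's rmb(). */
#define read_barrier() __asm__ __volatile__("" ::: "memory")

/* Returns 1 and fills *payload if the entry's color matches the color
 * the driver currently expects, i.e. the hardware has finished writing
 * this entry. */
static int cq_entry_ready(const volatile struct cq_entry *e,
			  uint8_t expect_color, uint16_t *payload)
{
	uint8_t color = (e->type_color >> COLOR_SHIFT) & COLOR_MASK;

	if (color != expect_color)
		return 0;

	/* Order the color read before the payload reads; this is the
	 * reordering hazard the rmb() in enic_rq_cq_desc_dec() guards
	 * against. */
	read_barrier();

	*payload = e->payload;
	return 1;
}
```

Without the barrier, the compiler (or a weakly ordered CPU, in the real rmb() case) could hoist the payload read above the color check and hand back stale data from a half-written entry.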
