Message-ID: <cfa6e78d-82ca-43d2-a8df-48fcb7d6301e@gmail.com>
Date: Wed, 14 Jan 2026 10:23:21 +0200
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Leon Hwang <leon.hwang@...ux.dev>, netdev@...r.kernel.org
Cc: Saeed Mahameed <saeedm@...dia.com>, Tariq Toukan <tariqt@...dia.com>,
 Mark Bloch <mbloch@...dia.com>, Leon Romanovsky <leon@...nel.org>,
 Andrew Lunn <andrew+netdev@...n.ch>, "David S . Miller"
 <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 Oz Shlomo <ozsh@...lanox.com>, Paul Blakey <paulb@...lanox.com>,
 Khalid Manaa <khalidm@...dia.com>, Achiad Shochat <achiad@...lanox.com>,
 Jiayuan Chen <jiayuan.chen@...ux.dev>, linux-rdma@...r.kernel.org,
 linux-kernel@...r.kernel.org, Leon Huang Fu <leon.huangfu@...pee.com>
Subject: Re: [PATCH net-next] net/mlx5e: Mask wqe_id when handling rx cqe



On 12/01/2026 10:03, Leon Hwang wrote:
> The wqe_id from CQE contains wrap counter bits in addition to the WQE
> index. Mask it with sz_m1 to prevent out-of-bounds access to the
> rq->mpwqe.info[] array when wrap counter causes wqe_id to exceed RQ size.
> 
> Without this fix, the driver crashes with NULL pointer dereference:
> 
>    BUG: kernel NULL pointer dereference, address: 0000000000000020
>    RIP: 0010:mlx5e_skb_from_cqe_mpwrq_linear+0xb3/0x280 [mlx5_core]
>    Call Trace:
>     <IRQ>
>     mlx5e_handle_rx_cqe_mpwrq+0xe3/0x290 [mlx5_core]
>     mlx5e_poll_rx_cq+0x97/0x820 [mlx5_core]
>     mlx5e_napi_poll+0x110/0x820 [mlx5_core]
> 

Hi,

We do not expect an out-of-bounds index here, so masking it this way is not 
necessarily the correct fix.

Can you please elaborate on your test case and setup, and how to reproduce it?

> Fixes: dfd9e7500cd4 ("net/mlx5e: Rx, Split rep rx mpwqe handler from nic")
> Fixes: f97d5c2a453e ("net/mlx5e: Add handle SHAMPO cqe support")
> Fixes: 461017cb006a ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
> Signed-off-by: Leon Huang Fu <leon.huangfu@...pee.com>
> Signed-off-by: Leon Hwang <leon.hwang@...ux.dev>
> ---
>   drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 5 +++++
>   drivers/net/ethernet/mellanox/mlx5/core/en_rx.c   | 6 +++---
>   2 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
> index 7e191e1569e8..df8e671d5115 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
> @@ -583,4 +583,9 @@ static inline struct mlx5e_mpw_info *mlx5e_get_mpw_info(struct mlx5e_rq *rq, int
>   
>   	return (struct mlx5e_mpw_info *)((char *)rq->mpwqe.info + array_size(i, isz));
>   }
> +
> +static inline u16 mlx5e_rq_cqe_wqe_id(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
> +{
> +	return be16_to_cpu(cqe->wqe_id) & rq->mpwqe.wq.fbc.sz_m1;
> +}
>   #endif
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 1f6930c77437..25c04684271c 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -1957,7 +1957,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
>   static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
>   {
>   	u16 cstrides       = mpwrq_get_cqe_consumed_strides(cqe);
> -	u16 wqe_id         = be16_to_cpu(cqe->wqe_id);
> +	u16 wqe_id         = mlx5e_rq_cqe_wqe_id(rq, cqe);
>   	struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, wqe_id);
>   	u16 stride_ix      = mpwrq_get_cqe_stride_index(cqe);
>   	u32 wqe_offset     = stride_ix << rq->mpwqe.log_stride_sz;
> @@ -2373,7 +2373,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
>   	u16 cstrides		= mpwrq_get_cqe_consumed_strides(cqe);
>   	u32 data_offset		= wqe_offset & (PAGE_SIZE - 1);
>   	u32 cqe_bcnt		= mpwrq_get_cqe_byte_cnt(cqe);
> -	u16 wqe_id		= be16_to_cpu(cqe->wqe_id);
> +	u16 wqe_id		= mlx5e_rq_cqe_wqe_id(rq, cqe);
>   	u32 page_idx		= wqe_offset >> PAGE_SHIFT;
>   	u16 head_size		= cqe->shampo.header_size;
>   	struct sk_buff **skb	= &rq->hw_gro_data->skb;
> @@ -2478,7 +2478,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
>   static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
>   {
>   	u16 cstrides       = mpwrq_get_cqe_consumed_strides(cqe);
> -	u16 wqe_id         = be16_to_cpu(cqe->wqe_id);
> +	u16 wqe_id         = mlx5e_rq_cqe_wqe_id(rq, cqe);
>   	struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, wqe_id);
>   	u16 stride_ix      = mpwrq_get_cqe_stride_index(cqe);
>   	u32 wqe_offset     = stride_ix << rq->mpwqe.log_stride_sz;

