Message-ID: <1758532715-820422-3-git-send-email-tariqt@nvidia.com>
Date: Mon, 22 Sep 2025 12:18:35 +0300
From: Tariq Toukan <tariqt@...dia.com>
To: Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>
CC: Saeed Mahameed <saeedm@...dia.com>, Tariq Toukan <tariqt@...dia.com>,
	Mark Bloch <mbloch@...dia.com>, Leon Romanovsky <leon@...nel.org>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	Ilias Apalodimas <ilias.apalodimas@...aro.org>,
	<netdev@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, Gal Pressman <gal@...dia.com>,
	Dragos Tatulea <dtatulea@...dia.com>
Subject: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
From: Dragos Tatulea <dtatulea@...dia.com>

When the user configures a large ring size (8K) and a large MTU (9000)
in HW-GRO mode, queue allocation fails because the resulting page_pool
size exceeds the page_pool limit.

Clamp the pool_size to the limit.
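For illustration, here is a minimal standalone sketch of the failure
mode and the fix, outside the driver. PAGE_POOL_SIZE_LIMIT is assumed
to come from patch 1/2 of this series; the helper name below is
hypothetical, and the -E2BIG behavior is page_pool's sanity check
rejecting oversized rings. DMA-mapping flags and the device pointer are
omitted for brevity.

    #include <linux/minmax.h>
    #include <net/page_pool/types.h>
    #include <net/page_pool/helpers.h>

    /* Hypothetical helper: create an RX page_pool sized for the ring. */
    static struct page_pool *rq_pool_create(u32 pool_size)
    {
    	struct page_pool_params pp_params = { 0 };

    	/* Without this clamp, a large ring (e.g. 8K entries) combined
    	 * with a large MTU (9000) in HW-GRO mode can request more
    	 * pages than page_pool accepts, and page_pool_create()
    	 * returns ERR_PTR(-E2BIG).
    	 */
    	pool_size = min_t(u32, pool_size, PAGE_POOL_SIZE_LIMIT);

    	pp_params.order = 0;
    	pp_params.pool_size = pool_size;

    	return page_pool_create(&pp_params);
    }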
Signed-off-by: Dragos Tatulea <dtatulea@...dia.com>
Signed-off-by: Tariq Toukan <tariqt@...dia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 5e007bb3bad1..e56052895776 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -989,6 +989,8 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		/* Create a page_pool and register it with rxq */
 		struct page_pool_params pp_params = { 0 };
 
+		pool_size = min_t(u32, pool_size, PAGE_POOL_SIZE_LIMIT);
+
 		pp_params.order = 0;
 		pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pp_params.pool_size = pool_size;
--
2.31.1