Message-ID: <5d2eef31-8e5a-4831-b050-cdfd65e99e27@gmail.com>
Date: Tue, 30 Sep 2025 10:10:39 +0300
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Dragos Tatulea <dtatulea@...dia.com>, Tariq Toukan <tariqt@...dia.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages
On 26/09/2025 16:16, Dragos Tatulea wrote:
> page_pool_init() returns E2BIG when the page_pool size goes above 32K
> pages. As some drivers are configuring the page_pool size according to
> the MTU and ring size, there are cases where this limit is exceeded and
> the queue creation fails.
>
> The page_pool size doesn't have to cover a full queue, especially for
> larger ring sizes. So clamp the size instead of returning an error. Do
> this in the core to avoid having each driver do the clamping.
>
> The current limit was deemed too high [1], so it was reduced to 16K to
> avoid page waste.
>
> [1] https://lore.kernel.org/all/1758532715-820422-3-git-send-email-tariqt@nvidia.com/
>
> Signed-off-by: Dragos Tatulea <dtatulea@...dia.com>
> ---
> Changes since v1 [1]:
> - Switched to clamping in page_pool. (Jakub)
> - Reduced 32K -> 16K limit. (Jakub)
> - Dropped mlx5 patch. (Jakub)
>
> [1] https://lore.kernel.org/all/1758532715-820422-1-git-send-email-tariqt@nvidia.com/
Reviewed-by: Tariq Toukan <tariqt@...dia.com>