Message-ID: <CAHS8izOEn+C5QexSPZT3_ekUr2zR1dm9R6OsoGBPaqg5MFvBRQ@mail.gmail.com>
Date: Wed, 11 Jun 2025 22:16:18 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Mark Bloch <mbloch@...dia.com>
Cc: "David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>,
Andrew Lunn <andrew+netdev@...n.ch>, saeedm@...dia.com, gal@...dia.com,
leonro@...dia.com, tariqt@...dia.com, Leon Romanovsky <leon@...nel.org>,
netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org, Dragos Tatulea <dtatulea@...dia.com>,
Cosmin Ratiu <cratiu@...dia.com>
Subject: Re: [PATCH net-next v4 08/11] net/mlx5e: Add support for UNREADABLE
netmem page pools
On Tue, Jun 10, 2025 at 8:20 AM Mark Bloch <mbloch@...dia.com> wrote:
>
> From: Saeed Mahameed <saeedm@...dia.com>
>
> On netdev_rx_queue_restart, a special type of page pool may be expected.
>
> In this patch, declare support for UNREADABLE netmem iov pages in the
> pool params only when header-data-split (SHAMPO) RQ mode is enabled,
> and also set the queue index in the page pool params struct.
>
> SHAMPO mode requirement: without header split, RX needs to peek at the
> data, so we can't do UNREADABLE_NETMEM.
>
> The patch also enables the use of a separate page pool for headers when
> a memory provider is installed for the queue, otherwise the same common
> page pool continues to be used.
>
> Signed-off-by: Saeed Mahameed <saeedm@...dia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@...dia.com>
> Signed-off-by: Cosmin Ratiu <cratiu@...dia.com>
> Signed-off-by: Tariq Toukan <tariqt@...dia.com>
> Signed-off-by: Mark Bloch <mbloch@...dia.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index 5e649705e35f..a51e204bd364 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -749,7 +749,9 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
>
> static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
> {
> - return false;
> + struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
> +
> + return !!rxq->mp_params.mp_ops;
This is kinda assuming that all future memory providers will return
unreadable memory, which is not a restriction I have in mind... in
theory there's nothing wrong with a memory provider that feeds readable
pages. Technically the right thing to do here would be to define a new
helper page_pool_is_readable() and have the mp report to the pp whether
its memory is all readable or not.
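
For illustration, a minimal sketch of what that could look like. Only
the helper name comes from the suggestion above; the "readable" flag on
the mp ops and the reworked mlx5 check are hypothetical:

/* Hypothetical: the "readable" field does not exist upstream; each
 * memory provider would set it when registering its ops.
 */
static inline bool page_pool_is_readable(const struct page_pool *pool)
{
	/* No memory provider installed: plain kernel pages. */
	if (!pool->mp_ops)
		return true;

	return pool->mp_ops->readable;
}

The driver check in this patch would then only split out a separate
header pool when the installed mp actually produces unreadable memory:

static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
{
	struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);

	return rxq->mp_params.mp_ops && !rxq->mp_params.mp_ops->readable;
}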
But all this sounds like a huge hassle for an unnecessary amount of
future proofing, so I guess this is fine.
Reviewed-by: Mina Almasry <almasrymina@...gle.com>
--
Thanks,
Mina