Message-ID: <CAHS8izNyFtcWd0wPGoCdZXtZkjqWk6VgLAyk4anfCQjGP2uk-w@mail.gmail.com>
Date: Thu, 12 Jun 2025 13:47:24 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Dragos Tatulea <dtatulea@...dia.com>
Cc: Mark Bloch <mbloch@...dia.com>, "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>,
Andrew Lunn <andrew+netdev@...n.ch>, saeedm@...dia.com, gal@...dia.com,
leonro@...dia.com, tariqt@...dia.com, Leon Romanovsky <leon@...nel.org>,
netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org, Cosmin Ratiu <cratiu@...dia.com>
Subject: Re: [PATCH net-next v4 08/11] net/mlx5e: Add support for UNREADABLE
netmem page pools
On Thu, Jun 12, 2025 at 1:46 AM Dragos Tatulea <dtatulea@...dia.com> wrote:
>
> On Wed, Jun 11, 2025 at 10:16:18PM -0700, Mina Almasry wrote:
> > On Tue, Jun 10, 2025 at 8:20 AM Mark Bloch <mbloch@...dia.com> wrote:
> > >
> > > From: Saeed Mahameed <saeedm@...dia.com>
> > >
> > > On netdev_rx_queue_restart, a special type of page pool may be expected.
> > >
> > > In this patch, declare support for UNREADABLE netmem iov pages in the
> > > pool params only when header-data split (SHAMPO) RQ mode is enabled,
> > > and also set the queue index in the page pool params struct.
> > >
> > > SHAMPO mode requirement: without header split, RX needs to peek at
> > > the data, so we can't do UNREADABLE_NETMEM.
> > >
> > > The patch also enables the use of a separate page pool for headers when
> > > a memory provider is installed for the queue, otherwise the same common
> > > page pool continues to be used.
> > >
> > > Signed-off-by: Saeed Mahameed <saeedm@...dia.com>
> > > Reviewed-by: Dragos Tatulea <dtatulea@...dia.com>
> > > Signed-off-by: Cosmin Ratiu <cratiu@...dia.com>
> > > Signed-off-by: Tariq Toukan <tariqt@...dia.com>
> > > Signed-off-by: Mark Bloch <mbloch@...dia.com>
> > > ---
> > > drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 ++++++++-
> > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > > index 5e649705e35f..a51e204bd364 100644
> > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > > @@ -749,7 +749,9 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
> > >
> > > static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
> > > {
> > > - return false;
> > > + struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
> > > +
> > > + return !!rxq->mp_params.mp_ops;
> >
> > This is kinda assuming that all future memory providers will return
> > unreadable memory, which is not a restriction I have in mind... in
> > theory there is nothing wrong with memory providers that feed readable
> > pages. Technically the right thing to do here is to define a new
> > helper page_pool_is_readable() and have the mp report to the pp if
> > it's all readable or not.
> >
> The API is already there: page_pool_is_unreadable(). But it uses the
> same logic...
>
Ugh, I was evidently not paying attention when that was added. I guess
everyone thinks memory provider == unreadable memory. I think it's
more a coincidence that the first 2 memory providers give unreadable
memory. Whatever I guess; it's good enough for now :D
> However, having a pp-level API is a bit limiting: as Cosmin pointed out,
> mlx5 can't use it because it needs to know in advance whether this
> page_pool is for unreadable memory, in order to correctly size the data
> page_pool (with or without headers).
>
Yeah, in that case mlx5 would do something like:

    return !rxq->mp_params.mp_ops->is_readable();

if we decided that mps could report whether they're readable or not. For
now I guess assuming all mps are unreadable is fine.
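Spelled out a bit more (the op name and signature below are invented
purely for illustration, nothing like this exists today), that could
look like:

    /*
     * Hypothetical sketch: if memory_provider_ops grew an op such as
     *
     *     bool (*is_readable)(const struct pp_memory_provider_params *p);
     *
     * the driver check could become:
     */
    static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
    {
            struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);

            /* No provider installed: plain readable pages, keep the shared pool. */
            if (!rxq->mp_params.mp_ops)
                    return false;

            /* Only split headers into their own pool when the provider's
             * memory is not CPU-readable.
             */
            return !rxq->mp_params.mp_ops->is_readable(&rxq->mp_params);
    }

That would keep readable-memory providers on the common pool without
special-casing them in the driver.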
--
Thanks,
Mina