Message-ID: <epa72y2pomluhhbeziycpnpcraiemt37xmjk34c4h5n7exgqhr@an2rnvzbjx2v>
Date: Wed, 29 Oct 2025 16:43:53 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: Simon Horman <horms@...nel.org>, Tariq Toukan <tariqt@...dia.com>
Cc: Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>, Mark Bloch <mbloch@...dia.com>, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org, Gal Pressman <gal@...dia.com>
Subject: Re: [PATCH net 1/3] net/mlx5e: SHAMPO, Fix header mapping for 64K pages
On Wed, Oct 29, 2025 at 03:51:18PM +0000, Simon Horman wrote:
> On Tue, Oct 28, 2025 at 08:47:17AM +0200, Tariq Toukan wrote:
> > From: Dragos Tatulea <dtatulea@...dia.com>
> >
> > HW-GRO is broken on mlx5 for 64K page sizes. The patch in the Fixes
> > tag didn't take larger page sizes into account when aligning down
> > max_ksm_entries. For a 64K page size, max_ksm_entries is 0, which
> > skips mapping header pages via WQE UMR. This breaks header-data
> > split and results in the following syndrome:
> >
> > mlx5_core 0000:00:08.0 eth2: Error cqe on cqn 0x4c9, ci 0x0, qn 0x1133, opcode 0xe, syndrome 0x4, vendor syndrome 0x32
> > 00000000: 00 00 00 00 04 4a 00 00 00 00 00 00 20 00 93 32
> > 00000010: 55 00 00 00 fb cc 00 00 00 00 00 00 07 18 00 00
> > 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 4a
> > 00000030: 00 00 3b c7 93 01 32 04 00 00 00 00 00 00 bf e0
> > mlx5_core 0000:00:08.0 eth2: ERR CQE on RQ: 0x1133
> >
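For context, the failure is plain integer arithmetic: once a page holds
more header slots than one UMR WQE has room for in KSM entries, the
align-down truncates max_ksm_entries to zero. A minimal userspace
sketch (the two constants below are illustrative stand-ins, not the
real values behind MLX5E_MAX_KSM_PER_WQE() and
MLX5E_SHAMPO_WQ_HEADER_PER_PAGE):

	#include <stdio.h>

	/* Same semantics as the kernel's ALIGN_DOWN() for a power-of-two 'a'. */
	#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

	int main(void)
	{
		unsigned int max_ksm_per_wqe = 512;	/* assumed per-WQE KSM capacity */

		/* 4K pages: few headers per page, align-down keeps a useful value. */
		printf("4K:  %u\n", ALIGN_DOWN(max_ksm_per_wqe, 64));	/* -> 512 */

		/* 64K pages: headers per page exceed the per-WQE capacity, so
		 * the align-down collapses to 0 and header mapping via UMR is
		 * skipped entirely.
		 */
		printf("64K: %u\n", ALIGN_DOWN(max_ksm_per_wqe, 1024));	/* -> 0 */

		return 0;
	}
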
> > Furthermore, the function that fills in WQE UMRs for the headers
> > (mlx5e_build_shampo_hd_umr()) only supports mapping page sizes that
> > fit in a single UMR WQE.
> >
> > This patch goes back to the old, non-aligned max_ksm_entries value
> > and changes mlx5e_build_shampo_hd_umr() to support mapping a large
> > page over multiple UMR WQEs.
> >
> > This means that mlx5e_build_shampo_hd_umr() can now leave a page only
> > partially mapped. The caller, mlx5e_build_shampo_hd_umr(), ensures that
>
> It's not particularly important, but I think the caller is
> mlx5e_alloc_rx_hd_mpwqe().
>
Right. Sorry. Will fix it.
> > there are enough UMR WQEs to cover complete pages by working on
> > ksm_entries that are multiples of MLX5E_SHAMPO_WQ_HEADER_PER_PAGE.
> >
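To make the new split of responsibilities concrete, here is a runnable
sketch of the contract (simplified stand-ins, not the actual driver
code; build_shampo_hd_umr() plays the role of
mlx5e_build_shampo_hd_umr() and both constants are assumed values):

	#include <stdio.h>

	#define ENTRIES_PER_WQE		512	/* assumed KSM capacity of one UMR WQE */
	#define HEADERS_PER_PAGE	1024	/* e.g. a 64K page of small header slots */

	/* Callee: maps at most one WQE's worth of KSM entries, so it may
	 * leave the current page only partially mapped.
	 */
	static unsigned int build_shampo_hd_umr(unsigned int ksm_entries)
	{
		unsigned int n = ksm_entries < ENTRIES_PER_WQE ?
				 ksm_entries : ENTRIES_PER_WQE;

		printf("UMR WQE maps %u KSM entries\n", n);
		return n;
	}

	int main(void)
	{
		unsigned int wanted = 2500;
		/* Caller: round down to whole pages' worth of headers ... */
		unsigned int ksm_entries = wanted - (wanted % HEADERS_PER_PAGE);

		/* ... then post as many UMR WQEs as needed, so every page ends
		 * up fully mapped even when one WQE cannot cover it.
		 */
		while (ksm_entries)
			ksm_entries -= build_shampo_hd_umr(ksm_entries);

		return 0;
	}
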
> > Fixes: 8a0ee54027b1 ("net/mlx5e: SHAMPO, Simplify UMR allocation for headers")
> > Signed-off-by: Dragos Tatulea <dtatulea@...dia.com>
> > Signed-off-by: Tariq Toukan <tariqt@...dia.com>
> > ---
> > .../net/ethernet/mellanox/mlx5/core/en_rx.c | 34 +++++++++----------
> > 1 file changed, 16 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > index 1c79adc51a04..77f7a1ca091d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > @@ -679,25 +679,24 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
> > umr_wqe = mlx5_wq_cyc_get_wqe(&sq->wq, pi);
> > build_ksm_umr(sq, umr_wqe, shampo->mkey_be, index, ksm_entries);
> >
> > - WARN_ON_ONCE(ksm_entries & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1));
> > - while (i < ksm_entries) {
> > - struct mlx5e_frag_page *frag_page = mlx5e_shampo_hd_to_frag_page(rq, index);
> > + for ( ; i < ksm_entries; i++, index++) {
>
> Also, if you have to respin for some reason, I would move the
> initialisation of i to 0 from its declaration to the for loop.
>
> ...
If Tariq respins, I will change it.
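For the record, I read the suggestion as dropping the initialiser from
the declaration, i.e. something like this (untested; the declaration of
i is not visible in the hunk above):

	int i;
	...
	for (i = 0; i < ksm_entries; i++, index++) {
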
Thanks,
Dragos