Message-ID: <3bcd5407728640109a1868b2425132461cacc6fc.camel@kernel.org>
Date: Thu, 11 Mar 2021 14:48:52 -0800
From: Saeed Mahameed <saeed@...nel.org>
To: Tariq Toukan <ttoukan.linux@...il.com>,
Arnd Bergmann <arnd@...nel.org>,
Leon Romanovsky <leon@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Tariq Toukan <tariqt@...dia.com>,
Noam Stolero <noams@...dia.com>, Tal Gilboa <talgi@...dia.com>
Cc: Arnd Bergmann <arnd@...db.de>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Roi Dayan <roid@...dia.com>, Vlad Buslov <vladbu@...dia.com>,
Paul Blakey <paulb@...dia.com>, Oz Shlomo <ozsh@...lanox.com>,
Eli Cohen <eli@...lanox.com>,
Ariel Levkovich <lariel@...dia.com>,
Maor Dickman <maord@...dia.com>,
Tariq Toukan <tariqt@...lanox.com>, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
clang-built-linux@...glegroups.com
Subject: Re: [PATCH] net/mlx5e: allocate 'indirection_rqt' buffer dynamically
On Mon, 2021-03-08 at 18:28 +0200, Tariq Toukan wrote:
>
>
> On 3/8/2021 5:32 PM, Arnd Bergmann wrote:
> > From: Arnd Bergmann <arnd@...db.de>
> >
> > Increasing the size of the indirection_rqt array from 128 to 256
> > entries pushed the stack usage of the mlx5e_hairpin_fill_rqt_rqns()
> > function over the warning limit when building with clang and
> > CONFIG_KASAN:
> >
> > drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:970:1: error: stack frame size of 1180 bytes in function 'mlx5e_tc_add_nic_flow' [-Werror,-Wframe-larger-than=]
> >
> > Using dynamic allocation here is safe because the caller does the
> > same, and it reduces the stack usage of the function to just a few
> > bytes.
> >
> > Fixes: 1dd55ba2fb70 ("net/mlx5e: Increase indirection RQ table size to 256")
> > Signed-off-by: Arnd Bergmann <arnd@...db.de>
> > ---
> >  drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 16 +++++++++++++---
> >  1 file changed, 13 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
> > index 0da69b98f38f..66f98618dc13 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
> > @@ -445,12 +445,16 @@ static void mlx5e_hairpin_destroy_transport(struct mlx5e_hairpin *hp)
> >  	mlx5_core_dealloc_transport_domain(hp->func_mdev, hp->tdn);
> >  }
> >
> > -static void mlx5e_hairpin_fill_rqt_rqns(struct mlx5e_hairpin *hp, void *rqtc)
> > +static int mlx5e_hairpin_fill_rqt_rqns(struct mlx5e_hairpin *hp, void *rqtc)
> > {
> > -	u32 indirection_rqt[MLX5E_INDIR_RQT_SIZE], rqn;
> > +	u32 *indirection_rqt, rqn;
> >  	struct mlx5e_priv *priv = hp->func_priv;
> >  	int i, ix, sz = MLX5E_INDIR_RQT_SIZE;
> >
> > +	indirection_rqt = kcalloc(sz, sizeof(*indirection_rqt), GFP_KERNEL);
> > +	if (!indirection_rqt)
> > +		return -ENOMEM;
> > +
> >  	mlx5e_build_default_indir_rqt(indirection_rqt, sz,
> >  				      hp->num_channels);
> >
> > @@ -462,6 +466,9 @@ static void mlx5e_hairpin_fill_rqt_rqns(struct mlx5e_hairpin *hp, void *rqtc)
> >  		rqn = hp->pair->rqn[ix];
> >  		MLX5_SET(rqtc, rqtc, rq_num[i], rqn);
> >  	}
> > +
> > +	kfree(indirection_rqt);
> > +	return 0;
> >
> >  static int mlx5e_hairpin_create_indirect_rqt(struct mlx5e_hairpin *hp)
> > @@ -482,12 +489,15 @@ static int mlx5e_hairpin_create_indirect_rqt(struct mlx5e_hairpin *hp)
> >  	MLX5_SET(rqtc, rqtc, rqt_actual_size, sz);
> >  	MLX5_SET(rqtc, rqtc, rqt_max_size, sz);
> >
> > -	mlx5e_hairpin_fill_rqt_rqns(hp, rqtc);
> > +	err = mlx5e_hairpin_fill_rqt_rqns(hp, rqtc);
> > +	if (err)
> > +		goto out;
> >
> >  	err = mlx5_core_create_rqt(mdev, in, inlen, &hp->indir_rqt.rqtn);
> >  	if (!err)
> >  		hp->indir_rqt.enabled = true;
> >
> > +out:
> >  	kvfree(in);
> >  	return err;
> > }
> >
>
> Reviewed-by: Tariq Toukan <tariqt@...dia.com>
> Thanks for your patch.
>
> Tariq
Applied to net-next-mlx5
Thanks!