Message-ID: <Z6nLtN5rn68kY4i0@mev-dev.igk.intel.com>
Date: Mon, 10 Feb 2025 10:49:49 +0100
From: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
To: Tariq Toukan <tariqt@...dia.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Andrew Lunn <andrew+netdev@...n.ch>, netdev@...r.kernel.org,
Saeed Mahameed <saeedm@...dia.com>, Gal Pressman <gal@...dia.com>,
Leon Romanovsky <leonro@...dia.com>,
Simon Horman <horms@...nel.org>,
Donald Hunter <donald.hunter@...il.com>,
Jiri Pirko <jiri@...nulli.us>, Jonathan Corbet <corbet@....net>,
Leon Romanovsky <leon@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Richard Cochran <richardcochran@...il.com>,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-rdma@...r.kernel.org, bpf@...r.kernel.org,
William Tu <witu@...dia.com>, Bodong Wang <bodong@...dia.com>
Subject: Re: [PATCH net-next 07/15] net/mlx5e: reduce rep rxq depth to 256
for ECPF
On Sun, Feb 09, 2025 at 12:17:08PM +0200, Tariq Toukan wrote:
> From: William Tu <witu@...dia.com>
>
> By experiment, a single-queue representor netdev consumes around
> 2.8MB of kernel memory, of which 1.8MB is due to the page pool for
> the RXQ. Scaling to a thousand representors consumes 2.8GB, which
> becomes a memory pressure issue for embedded devices such as the
> BlueField-2 (16GB memory) and BlueField-3 (32GB memory).
>
> Since representor netdevs mostly handle miss traffic, and ideally
> most of the traffic will be offloaded, reduce the non-uplink rep
> netdev's default RXQ depth from 1024 to 256 if mdev is the ECPF
> eswitch manager. This saves around 1.5MB of memory per regular RQ
> ((1024 - 256) * 2KB), allocated from the page pool.
>
> With an RXQ depth of 256, the netlink page pool tool reports:
> $./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> --dump page-pool-get
> [{'id': 277,
> 'ifindex': 9,
> 'inflight': 128,
> 'inflight-mem': 786432,
> 'napi-id': 775}]
>
> This is because MTU 1500 plus headroom consumes half a page, so 256
> RXQ entries consume around 128 pages (thus a page pool of size 128
> is created), shown above as 'inflight'.
>
> Note that each netdev has multiple types of RQs, including the
> regular, XSK, PTP, drop, and trap RQs. Since non-uplink
> representors only support the regular RQ, this patch only changes
> the regular RQ's default depth.
>
> Signed-off-by: William Tu <witu@...dia.com>
> Reviewed-by: Bodong Wang <bodong@...dia.com>
> Reviewed-by: Saeed Mahameed <saeedm@...dia.com>
> Signed-off-by: Tariq Toukan <tariqt@...dia.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en_rep.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
> index fdff9fd8a89e..da399adc8854 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
> @@ -65,6 +65,7 @@
> #define MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE \
> max(0x7, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
> #define MLX5E_REP_PARAMS_DEF_NUM_CHANNELS 1
> +#define MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE 0x8
>
> static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
>
> @@ -855,6 +856,8 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
>
> /* RQ */
> mlx5e_build_rq_params(mdev, params);
> + if (!mlx5e_is_uplink_rep(priv) && mlx5_core_is_ecpf(mdev))
> + params->log_rq_mtu_frames = MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE;
>
> /* If netdev is already registered (e.g. move from nic profile to uplink,
> * RTNL lock must be held before triggering netdev notifiers.
Thanks for the detailed commit message.
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
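
For anyone following the numbers, here is a quick sanity-check sketch
of the sizing arithmetic, assuming 4KiB pages and half a page (2KB)
consumed per RQ entry at MTU 1500, as the commit message describes
(the helper name is mine, not from the driver):

```python
# Assumptions taken from the commit message: 4 KiB pages, and each
# RQ entry at MTU 1500 + headroom consumes half a page (2 KiB).
PAGE_SIZE = 4096
BYTES_PER_ENTRY = PAGE_SIZE // 2  # half a page per entry

def pool_pages(rq_depth):
    """Pages needed by the page pool backing an RQ of this depth."""
    return rq_depth * BYTES_PER_ENTRY // PAGE_SIZE

# MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE = 0x8 gives a 2^8 = 256-entry RQ,
# which needs 128 pages -- matching the reported 'inflight': 128.
print(pool_pages(1 << 0x8))

# Per-RQ saving from shrinking 1024 -> 256 entries:
# (1024 - 256) * 2 KiB = 1572864 bytes, i.e. ~1.5 MiB per regular RQ.
print((1024 - 256) * BYTES_PER_ENTRY)
```

So the savings per RQ come out closer to 1.5MB than 1MB, which makes
the scaled numbers even better.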
> --
> 2.45.0