Message-ID: <14e219dd-a253-406b-8bfd-9b33f023c963@linux.dev>
Date: Tue, 24 Jun 2025 15:23:45 -0700
From: "yanjun.zhu" <yanjun.zhu@...ux.dev>
To: Fushuai Wang <wangfushuai@...du.com>, saeedm@...dia.com,
tariqt@...dia.com, leon@...nel.org, andrew+netdev@...n.ch,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com
Cc: netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next] net/mlx5e: Fix error handling in RQ memory model
registration

On 6/24/25 7:07 AM, Fushuai Wang wrote:
> Currently, when xdp_rxq_info_reg_mem_model() fails in the XSK path, the
> error handling incorrectly jumps to err_destroy_page_pool. While this
> may not cause errors, we should make it jump to the correct location.

In page_pool_destroy(), if pool is NULL the function simply returns, so
the goto err_destroy_page_pool statement does not cause any real issue.
That said, this commit does improve the clarity of the error-handling
logic.

Looks good to me.
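
For reference, here is page_pool_destroy() from net/core/page_pool.c;
note the early return when pool is NULL: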
void page_pool_destroy(struct page_pool *pool)
{
	if (!pool)
		return;

	if (!page_pool_put(pool))
		return;

	page_pool_disable_direct_recycling(pool);
	page_pool_free_frag(pool);

	if (!page_pool_release(pool))
		return;

	page_pool_detached(pool);
	pool->defer_start = jiffies;
	pool->defer_warn  = jiffies + DEFER_WARN_INTERVAL;

	INIT_DELAYED_WORK(&pool->release_dw, page_pool_release_retry);
	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
}
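
So even on the XSK path, where rq->page_pool is never allocated, the
goto err_destroy_page_pool in the current code ends up calling
page_pool_destroy(NULL), which is a no-op.
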
Reviewed-by: Zhu Yanjun <yanjun.zhu@...ux.dev>
Zhu Yanjun
>
> Signed-off-by: Fushuai Wang <wangfushuai@...du.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index ea822c69d137..1e3ba51b7995 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -915,6 +915,8 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
>  	if (xsk) {
>  		err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
>  						 MEM_TYPE_XSK_BUFF_POOL, NULL);
> +		if (err)
> +			goto err_free_by_rq_type;
>  		xsk_pool_set_rxq_info(rq->xsk_pool, &rq->xdp_rxq);
>  	} else {
>  		/* Create a page_pool and register it with rxq */
> @@ -941,12 +943,13 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
>  			rq->page_pool = NULL;
>  			goto err_free_by_rq_type;
>  		}
> -		if (xdp_rxq_info_is_reg(&rq->xdp_rxq))
> +		if (xdp_rxq_info_is_reg(&rq->xdp_rxq)) {
>  			err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq,
>  							 MEM_TYPE_PAGE_POOL, rq->page_pool);
> +			if (err)
> +				goto err_destroy_page_pool;
> +		}
>  	}
> -	if (err)
> -		goto err_destroy_page_pool;
> 
>  	for (i = 0; i < wq_sz; i++) {
>  		if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) {
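
For readers skimming the thread, here is a minimal userspace sketch of
the unwind pattern this patch restores. The names (toy_rq, rq_setup(),
the malloc() stand-ins) are illustrative only, not the mlx5e code; the
point is that each failure jumps to the label that undoes exactly what
has already been set up:

#include <stdlib.h>

struct toy_rq {
	void *regs;      /* stands in for the per-type RQ state */
	void *page_pool; /* stands in for rq->page_pool */
};

static int rq_setup(struct toy_rq *rq, int use_xsk, int fail_reg)
{
	rq->regs = malloc(64);
	if (!rq->regs)
		return -1;

	if (use_xsk) {
		/* XSK path: no page pool exists yet, so a registration
		 * failure must skip the pool teardown entirely.
		 */
		if (fail_reg)
			goto err_free_by_rq_type;
	} else {
		rq->page_pool = malloc(64);
		if (!rq->page_pool)
			goto err_free_by_rq_type;
		/* The pool exists now, so a registration failure must
		 * unwind it as well.
		 */
		if (fail_reg)
			goto err_destroy_page_pool;
	}
	return 0;

err_destroy_page_pool:
	free(rq->page_pool);	/* reached only once the pool exists */
err_free_by_rq_type:
	free(rq->regs);
	return -1;
}

int main(void)
{
	struct toy_rq rq = { 0, 0 };

	/* A registration failure on the XSK path must not touch the
	 * (nonexistent) page pool.
	 */
	return rq_setup(&rq, 1, 1) == -1 ? 0 : 1;
}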