Message-ID: <1e60b5ae-8015-41c5-a60d-e2a5b0d7c01b@linux.dev>
Date: Fri, 28 Jun 2024 15:57:53 +0800
From: Zhu Yanjun <yanjun.zhu@...ux.dev>
To: Anand Khoje <anand.a.khoje@...cle.com>, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Cc: saeedm@...lanox.com, leon@...nel.org, tariqt@...dia.com,
edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
davem@...emloft.net, rama.nichanamatlu@...cle.com,
manjunath.b.patil@...cle.com
Subject: Re: [PATCH v6] net/mlx5: Reclaim max 50K pages at once
On 2024/6/28 2:24, Anand Khoje wrote:
> In a non-FLR context, the CX-5 at times requests the release of ~8 million
> FW pages. This requires a humongous number of cmd mailboxes, which must be
> released once the pages are reclaimed. Releasing such a humongous number of
> cmd mailboxes consumes CPU time running into many seconds, which on
> non-preemptible kernels leads to critical processes starving on that CPU's
> run queue.
> To alleviate this, this change restricts a worker to reclaiming a maximum
> of 50K pages in one go.
> The 50K limit is aligned with the current firmware capacity/limit of
> releasing 50K pages at once per MLX5_CMD_OP_MANAGE_PAGES + MLX5_PAGES_TAKE
> device command.
>
> Our tests have shown a significant benefit from this change in terms of
> the time consumed by dma_pool_free().
> During a test where an event was raised by the HCA to release 1.3 million
> pages, the following observations were made:
>
> - Without this change:
> The number of mailbox messages allocated was around 20K, to accommodate
> the DMA addresses of 1.3 million pages.
> The average time spent by dma_pool_free() to free the DMA pool was between
> 16 usec and 32 usec.
> value ------------- Distribution ------------- count
> 256 | 0
> 512 |@ 287
> 1024 |@@@ 1332
> 2048 |@ 656
> 4096 |@@@@@ 2599
> 8192 |@@@@@@@@@@ 4755
> 16384 |@@@@@@@@@@@@@@@ 7545
> 32768 |@@@@@ 2501
> 65536 | 0
>
> - With this change:
> The number of mailbox messages allocated was around 800; this was to
> accommodate the DMA addresses of only 50K pages.
> The average time spent by dma_pool_free() to free the DMA pool in this case
> was between 1 usec and 2 usec.
> value ------------- Distribution ------------- count
> 256 | 0
> 512 |@@@@@@@@@@@@@@@@@@ 346
> 1024 |@@@@@@@@@@@@@@@@@@@@@@ 435
> 2048 | 0
> 4096 | 0
> 8192 | 1
> 16384 | 0
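
A quick back-of-the-envelope check on the two mailbox counts, assuming the
usual 512-byte cmd mailbox data block, i.e. 64 eight-byte page addresses per
mailbox (an assumption on my part, not stated in the patch):

	1,300,000 pages / 64 addresses per mailbox ~= 20,313 mailboxes (~20K)
	   50,000 pages / 64 addresses per mailbox ~=    782 mailboxes (~800)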
>
> Signed-off-by: Anand Khoje <anand.a.khoje@...cle.com>
> ---
> Changes in v6
> - Added comments to explain the usage of the negative MAX_RECLAIM_NPAGES
> ---
> drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> index d894a88..972e8e9 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
> @@ -608,6 +608,11 @@ enum {
> RELEASE_ALL_PAGES_MASK = 0x4000,
> };
>
> +/* This limit is based on the capability of the firmware, as it cannot
> + * release more than 50000 pages back to the host in one go.
> + */
> +#define MAX_RECLAIM_NPAGES (-50000)
> +
> static int req_pages_handler(struct notifier_block *nb,
> unsigned long type, void *data)
> {
> @@ -639,7 +644,16 @@ static int req_pages_handler(struct notifier_block *nb,
>
> req->dev = dev;
> req->func_id = func_id;
> - req->npages = npages;
> +
> + /* npages > 0 means the HCA is asking the host to allocate/give pages,
> + * npages < 0 means the HCA is asking the host to reclaim the pages back.
> + * Here we restrict the maximum number of pages that can be reclaimed
> + * in one go to MAX_RECLAIM_NPAGES. Note that MAX_RECLAIM_NPAGES is a
> + * negative value; since it is negative, we use max() (and not min())
> + * to restrict req->npages.
> + */
Reviewed-by: Zhu Yanjun <yanjun.zhu@...ux.dev>
Thanks,
Zhu Yanjun
> + req->npages = max_t(s32, npages, MAX_RECLAIM_NPAGES);
> req->ec_function = ec_function;
> req->release_all = release_all;
> INIT_WORK(&req->work, pages_work_handler);
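
For readers puzzling over the max()-vs-min() point in the comment above,
here is a minimal, self-contained sketch of the clamp behavior (plain C,
not the mlx5 code; max_t(s32, ...) is approximated with a ternary):

	#include <stdio.h>

	#define MAX_RECLAIM_NPAGES (-50000)

	/* Stand-in for max_t(s32, npages, MAX_RECLAIM_NPAGES). */
	static int clamp_reclaim(int npages)
	{
		return npages > MAX_RECLAIM_NPAGES ? npages : MAX_RECLAIM_NPAGES;
	}

	int main(void)
	{
		/* Reclaim requests are negative; max() moves a large request
		 * toward zero, capping a single batch at 50K pages.
		 */
		printf("%d\n", clamp_reclaim(-1300000)); /* -50000: capped */
		printf("%d\n", clamp_reclaim(-20000));   /* -20000: within the limit */
		printf("%d\n", clamp_reclaim(8000));     /*   8000: allocation, untouched */
		return 0;
	}

min() would instead select the more negative value, i.e. the larger reclaim
request, and the cap would never take effect.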