Message-ID: <ZpCI0mGJaNDFjMno@x130>
Date: Thu, 11 Jul 2024 18:37:22 -0700
From: Saeed Mahameed <saeed@...nel.org>
To: Anand Khoje <anand.a.khoje@...cle.com>
Cc: linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org, saeedm@...lanox.com, leon@...nel.org,
	tariqt@...dia.com, edumazet@...gle.com, kuba@...nel.org,
	pabeni@...hat.com, davem@...emloft.net,
	rama.nichanamatlu@...cle.com, manjunath.b.patil@...cle.com
Subject: Re: [PATCH net-next] net/mlx5: Reclaim max 50K pages at once

On 11 Jul 20:43, Anand Khoje wrote:
>In a non-FLR context, the CX-5 HCA at times requests the release of
>~8 million FW pages. This requires a huge number of cmd mailboxes,
>which must all be freed once the pages are reclaimed. Freeing that many
>cmd mailboxes consumes CPU time running into many seconds, which on
>non-preemptible kernels starves critical processes on that CPU's
>runqueue.
>To alleviate this, this change restricts the number of pages a worker
>will try to reclaim to a maximum of 50K in one go.
>The 50K limit is aligned with the current firmware capacity/limit of
>releasing 50K pages at once per MLX5_CMD_OP_MANAGE_PAGES +
>MLX5_PAGES_TAKE device command.

Where do you see this FW limit? Currently we don't have it in the
driver; the driver asks the FW to reclaim exactly as many pages as the
FW announced in the initial event. It is up to the FW to decide how many
of those pages it actually releases to the driver.
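
In code terms, that contract is roughly the following (standalone sketch
only; fw_take_pages() is a made-up stand-in for the
MLX5_CMD_OP_MANAGE_PAGES + MLX5_PAGES_TAKE command, and the cap inside
it merely models "the FW's choice"):

	/* Sketch, not driver code: the FW decides how many pages to hand
	 * back; modeled here as an arbitrary per-command cap.
	 */
	static int fw_take_pages(unsigned int requested, unsigned int *released)
	{
		*released = requested < 50000 ? requested : 50000;
		return 0;
	}

	static void handle_reclaim_event(unsigned int event_npages)
	{
		unsigned int released = 0;

		/* the driver asks for exactly what the event announced ... */
		fw_take_pages(event_npages, &released);
		/* ... and released <= event_npages is the FW's decision */
	}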

>
>Our tests have shown a significant benefit from this change in terms of
>the time consumed by dma_pool_free().
>During a test where an event was raised by the HCA
>to release 1.3 million pages, the following observations were made:
>
>- Without this change:
>Number of mailbox messages allocated was around 20K, to accommodate
>the DMA addresses of 1.3 million pages.
>The average time spent by dma_pool_free() to free the DMA pool was
>between 16 usec and 32 usec.
>           value  ------------- Distribution ------------- count
>             256 |                                         0
>             512 |@                                        287
>            1024 |@@@                                      1332
>            2048 |@                                        656
>            4096 |@@@@@                                    2599
>            8192 |@@@@@@@@@@                               4755
>           16384 |@@@@@@@@@@@@@@@                          7545
>           32768 |@@@@@                                    2501
>           65536 |                                         0
>
>- With this change:
>Number of mailbox messages allocated was around 800; this was to
>accommodate DMA addresses of only 50K pages.
>The average time spent by dma_pool_free() to free the DMA pool in this
>case was between 1 usec and 2 usec.
>           value  ------------- Distribution ------------- count
>             256 |                                         0
>             512 |@@@@@@@@@@@@@@@@@@                       346
>            1024 |@@@@@@@@@@@@@@@@@@@@@@                   435
>            2048 |                                         0
>            4096 |                                         0
>            8192 |                                         1
>           16384 |                                         0
>
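
(For scale: assuming each cmd mailbox carries a 512-byte data block,
i.e. 64 eight-byte DMA addresses, 1.3M pages / 64 ≈ 20.3K mailboxes and
50K / 64 ≈ 780, which is consistent with the ~20K and ~800 counts quoted
above.)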

Sounds like you only release 50K pages out of the 1.3M! What happens to
the rest? Eventually we need to release them, and waiting for driver
unload isn't an option.

My theory of what happened before the patch:
1. FW: event to request to release 1.3M;
2. driver: prepare a FW command to release 1.3M, send it to FW with 1.3M;
3. FW: release 50K;
4. goto 1;

After the patch:
1. FW: event to request to release 1.3M;
2. driver: prepare a FW command to release 50K, send it to FW with 50K;
3. FW: release 50K; the driver didn't ask for more, so no further event;
4. Done;

After your patch, it seems like there are 1.25M pages lingering in FW
ownership with no use.
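
If the goal is just to bound the per-command mailbox allocation, a
chunked loop would cap the cost without stranding the remainder, along
these lines (sketch only; reclaim_chunk() is a hypothetical wrapper
around one MANAGE_PAGES + PAGES_TAKE command of at most 'chunk' pages):

	s32 remaining = -npages;	/* reclaim events carry npages < 0 */

	while (remaining > 0) {
		s32 chunk = min_t(s32, remaining, 50000);

		/* reclaim_chunk() is hypothetical, see above */
		if (reclaim_chunk(dev, func_id, chunk, ec_function))
			break;		/* bail out on command failure */
		remaining -= chunk;
	}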

>Signed-off-by: Anand Khoje <anand.a.khoje@...cle.com>
>Reviewed-by: Leon Romanovsky <leonro@...dia.com>
>Reviewed-by: Zhu Yanjun <yanjun.zhu@...ux.dev>
>---
> drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>index d894a88..972e8e9 100644
>--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
>@@ -608,6 +608,11 @@ enum {
> 	RELEASE_ALL_PAGES_MASK = 0x4000,
> };
>
>+/* This limit is based on the capability of the firmware: it cannot
>+ * release more than 50000 pages back to the host in one go.
>+ */
>+#define MAX_RECLAIM_NPAGES (-50000)
>+
> static int req_pages_handler(struct notifier_block *nb,
> 			     unsigned long type, void *data)
> {
>@@ -639,7 +644,16 @@ static int req_pages_handler(struct notifier_block *nb,
>
> 	req->dev = dev;
> 	req->func_id = func_id;
>-	req->npages = npages;
>+
>+	/* npages > 0 means the HCA asks the host to allocate/give pages,
>+	 * npages < 0 means the HCA asks the host to reclaim pages back.
>+	 * Here we restrict the maximum number of pages that can be
>+	 * reclaimed to MAX_RECLAIM_NPAGES. Note that MAX_RECLAIM_NPAGES
>+	 * is a negative value.
>+	 * Since MAX_RECLAIM_NPAGES is negative, we use max() to clamp
>+	 * req->npages (and not min()).
>+	 */
>+	req->npages = max_t(s32, npages, MAX_RECLAIM_NPAGES);
> 	req->ec_function = ec_function;
> 	req->release_all = release_all;
> 	INIT_WORK(&req->work, pages_work_handler);
>-- 
>1.8.3.1
>
>
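
A side note on the max()-vs-min() comment in the hunk above: with
negative values, max() picks the value of smaller magnitude, which is
exactly the clamp intended here. A standalone illustration (max_t() is a
simplified stand-in for the kernel macro):

	#include <stdio.h>

	#define MAX_RECLAIM_NPAGES (-50000)
	/* simplified stand-in for the kernel's max_t() */
	#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

	int main(void)
	{
		int npages = -1300000;	/* FW event: reclaim 1.3M pages */

		/* -50000 > -1300000, so max() yields the 50K cap */
		printf("%d\n", max_t(int, npages, MAX_RECLAIM_NPAGES));
		return 0;
	}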
