Message-ID: <20240619085934.GL4025@unreal>
Date: Wed, 19 Jun 2024 11:59:34 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Anand Khoje <anand.a.khoje@...cle.com>
Cc: linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, saeedm@...lanox.com, davem@...emloft.net
Subject: Re: [PATCH v3] net/mlx5 : Reclaim max 50K pages at once
On Tue, Jun 18, 2024 at 11:14:33PM +0530, Anand Khoje wrote:
>
> On 6/16/24 21:14, Leon Romanovsky wrote:
> > On Fri, Jun 14, 2024 at 01:31:35PM +0530, Anand Khoje wrote:
> > > In a non-FLR context, the CX-5 at times requests the release of ~8
> > > million FW pages. This requires a huge number of cmd mailboxes, all of
> > > which must be freed once the pages are reclaimed. Freeing that many cmd
> > > mailboxes consumes CPU time running into many seconds, which on
> > > non-preemptible kernels starves critical processes on that CPU's run
> > > queue. To alleviate this, this change caps the number of pages a worker
> > > will try to reclaim at a maximum of 50K in one go.
> > > The 50K limit matches the current firmware capacity of releasing at
> > > most 50K pages at once per MLX5_CMD_OP_MANAGE_PAGES + MLX5_PAGES_TAKE
> > > device command.
> > >
> > > Our tests have shown a significant benefit from this change in the
> > > time consumed by dma_pool_free().
> > > During a test in which the HCA raised an event to release 1.3 million
> > > pages, the following was observed:
> > >
> > > - Without this change:
> > > The number of mailbox messages allocated was around 20K, to accommodate
> > > the DMA addresses of all 1.3 million pages.
> > > The average time spent by dma_pool_free() to free the DMA pool was
> > > between 16 usec and 32 usec.
> > > value ------------- Distribution ------------- count
> > > 256 | 0
> > > 512 |@ 287
> > > 1024 |@@@ 1332
> > > 2048 |@ 656
> > > 4096 |@@@@@ 2599
> > > 8192 |@@@@@@@@@@ 4755
> > > 16384 |@@@@@@@@@@@@@@@ 7545
> > > 32768 |@@@@@ 2501
> > > 65536 | 0
> > >
> > > - With this change:
> > > The number of mailbox messages allocated was around 800, since they
> > > only had to accommodate the DMA addresses of at most 50K pages.
> > > The average time spent by dma_pool_free() to free the DMA pool in this
> > > case was between 1 usec and 2 usec.
> > > value ------------- Distribution ------------- count
> > > 256 | 0
> > > 512 |@@@@@@@@@@@@@@@@@@ 346
> > > 1024 |@@@@@@@@@@@@@@@@@@@@@@ 435
> > > 2048 | 0
> > > 4096 | 0
> > > 8192 | 1
> > > 16384 | 0
> > >
> > > Signed-off-by: Anand Khoje <anand.a.khoje@...cle.com>
> > > ---
> > > Changes in v3:
> > > - Shifted the logic to function req_pages_handler() as per
> > > Leon's suggestion.
> > > ---
> > > drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 7 ++++++-
> > > 1 file changed, 6 insertions(+), 1 deletion(-)
> > >
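A minimal, standalone C sketch of the capping idea described in the
quoted commit message above (illustrative only: MAX_RECLAIM_PAGES and
reclaim_pages_batch() are hypothetical names, not the actual mlx5
driver code, which cannot run outside the kernel):

    #include <stdio.h>

    /* Cap each reclaim cycle at 50K pages, mirroring the stated firmware
     * limit per MLX5_CMD_OP_MANAGE_PAGES + MLX5_PAGES_TAKE command.
     */
    #define MAX_RECLAIM_PAGES (50 * 1024)

    /* Stand-in for issuing one page-reclaim device command; returns the
     * number of pages released.
     */
    static long reclaim_pages_batch(long npages)
    {
        printf("reclaiming %ld pages in one device command\n", npages);
        return npages;
    }

    int main(void)
    {
        long to_reclaim = 1300000; /* e.g. the 1.3M-page event from the test */

        /* Work in bounded batches instead of one huge reclaim, so the
         * number of cmd mailboxes allocated/freed per cycle stays small.
         */
        while (to_reclaim > 0) {
            long batch = to_reclaim < MAX_RECLAIM_PAGES
                             ? to_reclaim : MAX_RECLAIM_PAGES;
            to_reclaim -= reclaim_pages_batch(batch);
        }
        return 0;
    }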
> > The title has extra space:
> > "net/mlx5 : Reclaim max 50K pages at once" -> "net/mlx5: Reclaim max 50K pages at once"
> >
> > But the code looks good to me.
> >
> > Thanks,
> > Reviewed-by: Leon Romanovsky <leonro@...dia.com>
>
> Hi Leon,
>
> Thanks for providing the R-B. Should I send a v4 with the fix for the extra
> space issue?
Yes, please.

And run get_maintainer.pl to get the correct email addresses for the
maintainers and the mailing lists. This patch will be applied by the
netdev maintainers.

Thanks
>
> -Anand
>
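For reference, get_maintainer.pl is typically run from the kernel source
tree against the patch file; the file name below is hypothetical:

    ./scripts/get_maintainer.pl 0001-net-mlx5-Reclaim-max-50K-pages-at-once.patch

It prints the maintainers and mailing lists that should be on To/Cc for
the patch.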