Message-ID: <b8d13df786ea392b5337e0080bc9eaedffa95fef.camel@nvidia.com>
Date: Mon, 24 Apr 2023 11:59:22 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: "kuba@...nel.org" <kuba@...nel.org>,
"saeed@...nel.org" <saeed@...nel.org>
CC: Tariq Toukan <tariqt@...dia.com>,
Saeed Mahameed <saeedm@...dia.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>
Subject: Re: [net-next 11/15] net/mlx5e: RX, Hook NAPIs to page pools
On Thu, 2023-04-20 at 19:13 -0700, Jakub Kicinski wrote:
> On Thu, 20 Apr 2023 18:38:46 -0700 Saeed Mahameed wrote:
> > From: Dragos Tatulea <dtatulea@...dia.com>
> >
> > Link the NAPI to the rq's page_pool to improve page_pool cache
> > usage during skb recycling.
> >
> > Here are the observed improvements for an iperf single-stream
> > test case:
> >
> > - For 1500 MTU and legacy rq, seeing a 20% improvement in cache
> > usage.
> >
> > - For 9K MTU, seeing 33-40% page_pool cache usage improvements for
> > both striding and legacy rq (depending on whether the application
> > is running on the same core as the rq or not).
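
(For context, the hookup itself is a one-liner at pool creation time.
Rough sketch below, using the .napi field that Jakub's "allow caching
from safely localized NAPI" series adds to struct page_pool_params;
the variable names are approximations of the mlx5 ones, not the
literal driver code.)

	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= pool_size,
		.nid		= node,
		.dev		= rq->pdev,
		.dma_dir	= rq->buff.map_dir,
		.napi		= rq->cq.napi,	/* hook the pool to the rq's NAPI */
	};

	rq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->page_pool))
		return PTR_ERR(rq->page_pool);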
>
> I think you'll need a strategically placed page_pool_unlink_napi() once
> https://lore.kernel.org/all/20230419182006.719923-1-kuba@kernel.org/
> gets merged (which should be in minutes). Would you be able to follow
> up on this tomorrow?

Thanks for the tip, Jakub.

There's no "swap" stage in mlx5 (the page pool is destroyed while NAPI
is still disabled). So I think the page_pool_unlink_napi() call that
you added in page_pool_destroy() is sufficient.
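
Roughly, the teardown ordering I mean has this shape (a sketch with a
hypothetical function name, not the literal mlx5 close path):

	static void mlx5e_close_rq_sketch(struct mlx5e_rq *rq)
	{
		napi_disable(rq->cq.napi);	  /* NAPI quiesced first */
		/* ... deactivate the rq, drain outstanding work ... */
		page_pool_destroy(rq->page_pool); /* unlinks the NAPI internally */
	}

So by the time page_pool_destroy() runs, the NAPI can no longer be
scheduled, and the unlink inside it happens at a safe point.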
Thanks,
Dragos