Message-ID: <c130df76-9b18-40a9-9b0c-7ad21fd6625b@gmail.com>
Date: Thu, 6 Feb 2025 14:57:59 +0200
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Jakub Kicinski <kuba@...nel.org>, davem@...emloft.net
Cc: netdev@...r.kernel.org, edumazet@...gle.com, pabeni@...hat.com,
 andrew+netdev@...n.ch, horms@...nel.org, tariqt@...dia.com, hawk@...nel.org
Subject: Re: [PATCH net-next 0/4] eth: mlx4: use the page pool for Rx buffers



On 05/02/2025 5:12, Jakub Kicinski wrote:
> Convert mlx4 to page pool. I've been sitting on these patches for
> over a year, and Jonathan Lemon had a similar series years before.
> We never deployed it or sent upstream because it didn't really show
> much perf win under normal load (admittedly I think the real testing
> was done before Ilias's work on recycling).
> 
> During the v6.9 kernel rollout Meta's CDN team noticed that machines
> with CX3 Pro (mlx4) are prone to overloads (double digit % of CPU time
> spent mapping buffers in the IOMMU). The problem does not occur with
> modern NICs, so I dusted off this series and reportedly it still works.
> And it makes the problem go away, no overloads, perf back in line with
> older kernels. Something must have changed in IOMMU code, I guess.
> 
> This series is very simple, and can very likely be optimized further.
> Thing is, I don't have access to any CX3 Pro NICs. They only exist
> in CDN locations which haven't had a HW refresh for a while. So I can
> say this series survives a week under traffic w/ XDP enabled, but
> my ability to iterate and improve is a bit limited.

Hi Jakub,

Thanks for your patches.

As this series touches a critical data-path area, and you had no real 
option of testing it, we are taking it through a regression cycle, in 
parallel with the code review.

We should have results early next week. We'll update.

Regards,
Tariq

> 
> Jakub Kicinski (4):
>    eth: mlx4: create a page pool for Rx
>    eth: mlx4: don't try to complete XDP frames in netpoll
>    eth: mlx4: remove the local XDP fast-recycling ring
>    eth: mlx4: use the page pool for Rx buffers
> 
>   drivers/net/ethernet/mellanox/mlx4/mlx4_en.h |  15 +--
>   drivers/net/ethernet/mellanox/mlx4/en_rx.c   | 120 +++++++------------
>   drivers/net/ethernet/mellanox/mlx4/en_tx.c   |  17 ++-
>   3 files changed, 53 insertions(+), 99 deletions(-)
> 

