Message-ID: <366dbe9f-af4d-48ec-879e-1ac54cd5f3b6@intel.com>
Date: Mon, 30 Jun 2025 16:16:00 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: Jaroslav Pulchart <jaroslav.pulchart@...ddata.com>
CC: Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Jakub Kicinski <kuba@...nel.org>,
	Przemek Kitszel <przemyslaw.kitszel@...el.com>,
	"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>,
	"Damato, Joe" <jdamato@...tly.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
	Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
	"Czapnik, Lukasz" <lukasz.czapnik@...el.com>,
	"Dumazet, Eric" <edumazet@...gle.com>,
	"Zaki, Ahmed" <ahmed.zaki@...el.com>,
	Martin Karsten <mkarsten@...terloo.ca>,
	"Igor Raits" <igor@...ddata.com>,
	Daniel Secik <daniel.secik@...ddata.com>,
	"Zdenek Pesek" <zdenek.pesek@...ddata.com>
Subject: Re: [Intel-wired-lan] Increased memory usage on NUMA nodes with ICE
 driver after upgrade to 6.13.y (regression in commit 492a044508ad)



On 6/30/2025 2:56 PM, Jacob Keller wrote:
> Unfortunately, it looks like the fix I mentioned has landed in 6.14, so
> it's not a fix for your issue (since you mentioned that 6.14 failed
> testing on your system).
> 
> $ git describe --first-parent --contains --match=v* --exclude=*rc*
> 743bbd93cf29f653fae0e1416a31f03231689911
> v6.14~251^2~15^2~2
> 
> I don't see any other relevant changes since v6.14. I can check
> whether I see similar issues with CONFIG_MEM_ALLOC_PROFILING on some
> test systems here.
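
As a side note, for anyone reproducing this: enabling allocation
profiling should only need the following (a sketch assuming a recent
upstream tree; the option and sysctl names come from the mm Kconfig,
so double-check them against your kernel version):

# build-time options
CONFIG_MEM_ALLOC_PROFILING=y
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y

# runtime toggle and readout
$ sysctl vm.mem_profiling=1
$ cat /proc/allocinfo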

On my system, I see this at boot after loading the ice module:
$ grep -F "/ice/" /proc/allocinfo | sort -g | tail | numfmt --to=iec
>          26K      230 drivers/net/ethernet/intel/ice/ice_irq.c:84 [ice] func:ice_get_irq_res
>          48K        2 drivers/net/ethernet/intel/ice/ice_arfs.c:565 [ice] func:ice_init_arfs
>          57K      226 drivers/net/ethernet/intel/ice/ice_lib.c:397 [ice] func:ice_vsi_alloc_ring_stats
>          57K      226 drivers/net/ethernet/intel/ice/ice_lib.c:416 [ice] func:ice_vsi_alloc_ring_stats
>          85K      226 drivers/net/ethernet/intel/ice/ice_lib.c:1398 [ice] func:ice_vsi_alloc_rings
>         339K      226 drivers/net/ethernet/intel/ice/ice_lib.c:1422 [ice] func:ice_vsi_alloc_rings
>         678K      226 drivers/net/ethernet/intel/ice/ice_base.c:109 [ice] func:ice_vsi_alloc_q_vector
>         1.1M      257 drivers/net/ethernet/intel/ice/ice_fwlog.c:40 [ice] func:ice_fwlog_alloc_ring_buffs
>         7.2M      114 drivers/net/ethernet/intel/ice/ice_txrx.c:493 [ice] func:ice_setup_rx_ring
>         896M   229264 drivers/net/ethernet/intel/ice/ice_txrx.c:680 [ice] func:ice_alloc_mapped_page
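
That last line squares with each allocation being a single 4 KiB page;
as a quick sanity check (assuming PAGE_SIZE of 4096 here):

$ echo $((229264 * 4096)) | numfmt --to=iec
896M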

It's about 1 GB for the mapped pages. I don't see any increase from
moment to moment. I've started an iperf session to simulate some
traffic, and I'll leave it running to see if anything changes overnight.
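
To catch slow growth rather than eyeballing it, I'm also logging the
mapped-page line periodically, along these lines (a rough sketch; the
interval and log path are arbitrary):

$ while sleep 60; do date; grep -F 'ice_txrx.c:680' /proc/allocinfo; done | tee -a allocinfo-watch.log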

Is there anything else you can share about the traffic setup, or
anything else I could look into? Your system seems to use ~2.5x the
buffer size of mine, but that might just be because my system has fewer
CPUs.
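
If you can share your queue/ring configuration too, that would help
compare like for like; something along these lines (interface name
assumed):

$ ethtool -l <iface>    # channel (queue) counts
$ ethtool -g <iface>    # Rx/Tx ring sizes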

Hopefully I'll get some more results overnight.

Thanks,
Jake

