Message-ID: <37367c26-cd08-411e-99a6-589094ca7620@intel.com>
Date: Wed, 2 Jul 2025 11:01:06 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: Jaroslav Pulchart <jaroslav.pulchart@...ddata.com>
CC: Maciej Fijalkowski <maciej.fijalkowski@...el.com>, Jakub Kicinski
<kuba@...nel.org>, Przemek Kitszel <przemyslaw.kitszel@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>,
"Damato, Joe" <jdamato@...tly.com>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>, "Czapnik, Lukasz"
<lukasz.czapnik@...el.com>, "Dumazet, Eric" <edumazet@...gle.com>, "Zaki,
Ahmed" <ahmed.zaki@...el.com>, Martin Karsten <mkarsten@...terloo.ca>, "Igor
Raits" <igor@...ddata.com>, Daniel Secik <daniel.secik@...ddata.com>, "Zdenek
Pesek" <zdenek.pesek@...ddata.com>
Subject: Re: [Intel-wired-lan] Increased memory usage on NUMA nodes with ICE
driver after upgrade to 6.13.y (regression in commit 492a044508ad)
On 7/2/2025 2:48 AM, Jaroslav Pulchart wrote:
>>
>> On 6/30/2025 11:48 PM, Jaroslav Pulchart wrote:
>>>> On 6/30/2025 2:56 PM, Jacob Keller wrote:
>>>>> Unfortunately it looks like the fix I mentioned has landed in 6.14, so
>>>>> it's not a fix for your issue (since you mentioned 6.14 has failed
>>>>> testing in your system)
>>>>>
>>>>> $ git describe --first-parent --contains --match=v* --exclude=*rc*
>>>>> 743bbd93cf29f653fae0e1416a31f03231689911
>>>>> v6.14~251^2~15^2~2
>>>>>
>>>>> I don't see any other relevant changes since v6.14. I can try to see if
>>>>> I see similar issues with CONFIG_MEM_ALLOC_PROFILING on some test
>>>>> systems here.
>>>>
>>>> On my system I see this at boot after loading the ice module, from:
>>>>
>>>> $ grep -F "/ice/" /proc/allocinfo | sort -g | tail | numfmt --to=iec
>>>>> 26K 230 drivers/net/ethernet/intel/ice/ice_irq.c:84 [ice] func:ice_get_irq_res
>>>>> 48K 2 drivers/net/ethernet/intel/ice/ice_arfs.c:565 [ice] func:ice_init_arfs
>>>>> 57K 226 drivers/net/ethernet/intel/ice/ice_lib.c:397 [ice] func:ice_vsi_alloc_ring_stats
>>>>> 57K 226 drivers/net/ethernet/intel/ice/ice_lib.c:416 [ice] func:ice_vsi_alloc_ring_stats
>>>>> 85K 226 drivers/net/ethernet/intel/ice/ice_lib.c:1398 [ice] func:ice_vsi_alloc_rings
>>>>> 339K 226 drivers/net/ethernet/intel/ice/ice_lib.c:1422 [ice] func:ice_vsi_alloc_rings
>>>>> 678K 226 drivers/net/ethernet/intel/ice/ice_base.c:109 [ice] func:ice_vsi_alloc_q_vector
>>>>> 1.1M 257 drivers/net/ethernet/intel/ice/ice_fwlog.c:40 [ice] func:ice_fwlog_alloc_ring_buffs
>>>>> 7.2M 114 drivers/net/ethernet/intel/ice/ice_txrx.c:493 [ice] func:ice_setup_rx_ring
>>>>> 896M 229264 drivers/net/ethernet/intel/ice/ice_txrx.c:680 [ice] func:ice_alloc_mapped_page
>>>>
>>>> It's about 1GB for the mapped pages. I don't see any increase moment to
>>>> moment. I've started an iperf session to simulate some traffic, and I'll
>>>> leave this running to see if anything changes overnight.
>>>>
>>>> Is there anything else that you can share about the traffic setup or
>>>> otherwise that I could look into? Your system seems to use ~2.5 x the
>>>> buffer size as mine, but that might just be a smaller number of CPUs.
>>>>
>>>> Hopefully I'll get some more results overnight.
>>>
>>> The traffic is random production workloads from VMs, using standard
>>> Linux or OVS bridges. There is no specific pattern to it. I haven't had
>>> any luck reproducing this with iperf3 myself (or was not patient enough).
>>> The two active (UP) interfaces are in an LACP bonding setup. Here are
>>> our ethtool settings for the two member ports (em1 and p3p1):
>>>
>>
>> I had iperf3 running overnight and the memory usage for
>> ice_alloc_mapped_page is constant here. Mine was direct connections
>> without bridge or bonding. From your description I assume there's no XDP
>> happening either.
>
> Yes, no XDP in use.
>
> BTW the allocinfo after 6days uptime:
> # uptime ; sort -g /proc/allocinfo| tail -n 15
> 11:46:44 up 6 days, 2:18, 1 user, load average: 9.24, 11.33, 15.07
> 102489024 533797 fs/dcache.c:1681 func:__d_alloc
> 106229760 25935 mm/shmem.c:1854 func:shmem_alloc_folio
> 117118192 103097 fs/ext4/super.c:1388 [ext4] func:ext4_alloc_inode
> 134479872 32832 kernel/events/ring_buffer.c:811 func:perf_mmap_alloc_page
> 162783232 7656 mm/slub.c:2452 func:alloc_slab_page
> 189906944 46364 mm/memory.c:1056 func:folio_prealloc
> 499384320 121920 mm/percpu-vm.c:95 func:pcpu_alloc_pages
> 530579456 129536 mm/page_ext.c:271 func:alloc_page_ext
> 625876992 54186 mm/slub.c:2450 func:alloc_slab_page
> 838860800 400 mm/huge_memory.c:1165 func:vma_alloc_anon_folio_pmd
> 1014710272 247732 mm/filemap.c:1978 func:__filemap_get_folio
> 1056710656 257986 mm/memory.c:1054 func:folio_prealloc
> 1279262720 610 mm/khugepaged.c:1084 func:alloc_charge_folio
> 1334530048 325763 mm/readahead.c:186 func:ractl_alloc_folio
> 3341238272 412215 drivers/net/ethernet/intel/ice/ice_txrx.c:681 [ice] func:ice_alloc_mapped_page
>
3.2GB, meaning roughly an entire GB wasted compared to your usage at boot :(
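
(Back-of-the-envelope from the numbers above, so take it with a grain of
salt: 3341238272 bytes / 412215 allocations is about 8.1K per allocation on
your system, versus roughly 4.1K per allocation in my boot-time output, which
would suggest your Rx buffers are coming from order-1 pages.)
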
Unfortunately, I've had no luck trying to reproduce the conditions that
trigger this. We do have a series in flight to convert ice to page pool,
which we hope resolves this, but of course that isn't really a suitable
backport candidate.
It's quite frustrating that I can't figure out how to reproduce this, which
makes it hard to debug further where the leak is.
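
In the meantime it might help to keep sampling that one allocinfo line over
time, so we can at least see whether the growth is steady or bursty. A rough
sketch of what I have in mind (the interval and log path are just what I
picked):

  # append a timestamped ice_alloc_mapped_page sample every 10 minutes
  while true; do
      echo "$(date -Is) $(grep 'func:ice_alloc_mapped_page' /proc/allocinfo)" \
          >> /var/tmp/ice_alloc_mapped_page.log
      sleep 600
  done
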
I also discovered that the leak sanitizer doesn't cover page allocations :(
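
If it comes to digging into page allocations directly, page_owner might fill
that gap, assuming you can boot a kernel with CONFIG_PAGE_OWNER=y. Roughly,
per Documentation/mm/page_owner.rst:

  # boot with the page_owner=on kernel parameter, then:
  cat /sys/kernel/debug/page_owner > page_owner_full.txt
  # aggregate with the helper from the kernel tree (tools/mm/page_owner_sort)
  ./page_owner_sort page_owner_full.txt sorted_page_owner.txt
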
>>
>> I guess the traffic patterns of an iperf session are too regular, or
>> something to do with bridge or bonding... but I also struggle to see how
>> those could play a role in the buffer management in the ice driver...