Message-ID: <CAK8fFZ47qh=ezYwQ5hRPxmwidOkTj_ueQsKz9G7erp0RPtaQ3Q@mail.gmail.com>
Date: Fri, 11 Jul 2025 20:16:56 +0200
From: Jaroslav Pulchart <jaroslav.pulchart@...ddata.com>
To: Jacob Keller <jacob.e.keller@...el.com>
Cc: Maciej Fijalkowski <maciej.fijalkowski@...el.com>, Jakub Kicinski <kuba@...nel.org>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>, "Damato, Joe" <jdamato@...tly.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
"Czapnik, Lukasz" <lukasz.czapnik@...el.com>, "Dumazet, Eric" <edumazet@...gle.com>,
"Zaki, Ahmed" <ahmed.zaki@...el.com>, Martin Karsten <mkarsten@...terloo.ca>,
Igor Raits <igor@...ddata.com>, Daniel Secik <daniel.secik@...ddata.com>,
Zdenek Pesek <zdenek.pesek@...ddata.com>
Subject: Re: [Intel-wired-lan] Increased memory usage on NUMA nodes with ICE
driver after upgrade to 6.13.y (regression in commit 492a044508ad)
>
>
>
> On 7/9/2025 2:04 PM, Jaroslav Pulchart wrote:
> >>
> >>
> >> On 7/8/2025 5:50 PM, Jacob Keller wrote:
> >>>
> >>>
> >>> On 7/7/2025 3:03 PM, Jacob Keller wrote:
> >>>> Bad news: my hypothesis was incorrect.
> >>>>
> >>>> Good news: I can immediately see the problem if I set MTU to 9K and
> >>>> start an iperf3 session and just watch the count of allocations from
> >>>> ice_alloc_mapped_pages(). It goes up consistently, so I can quickly tell
> >>>> if a change is helping.
> >>>>
> >>>> I ported the stats from i40e for tracking the page allocations, and I
> >>>> can see that we're allocating new pages despite not actually performing
> >>>> releases.
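> >>>>
> >>>> (Roughly like the following local debug sketch; the counter names
> >>>> below are made up for illustration, only the
> >>>> ice_alloc_mapped_pages() hook point comes from the above:)
> >>>>
> >>>>   /* hypothetical per-ring debug counters, e.g. added to the Rx
> >>>>    * ring struct and bumped from the alloc/release paths */
> >>>>   u64 page_alloc_count;   /* bumped in ice_alloc_mapped_pages() */
> >>>>   u64 page_release_count; /* bumped wherever a page is freed */
> >>>>
> >>>>   /* under steady traffic the delta should level off once the
> >>>>    * ring is warm; instead page_alloc_count keeps climbing while
> >>>>    * page_release_count stays flat, i.e. pages are being replaced
> >>>>    * without ever being released */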
> >>>>
> >>>> I don't yet have a good understanding of what causes this, and the logic
> >>>> in ice is pretty hard to track...
> >>>>
> >>>> I'm going to try the page pool patches myself to see if this test bed
> >>>> triggers the same problems. Unfortunately I think I need someone else
> >>>> with more experience with the hotpath code to help figure out what's
> >>>> going wrong here...
> >>>
> >>> I believe I have isolated this and figured out the issue: With 9K MTU,
> >>> sometimes the hardware posts a multi-buffer frame with an extra
> >>> descriptor that has a size of 0 bytes with no data in it. When this
> >>> happens, our buffer-tracking logic fails to free this buffer. The page
> >>> is later overwritten, because we neither freed nor re-used it, and the
> >>> overwriting logic doesn't check for this.
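> >>>
> >>> In other words, the accounting failure looks roughly like this toy
> >>> userspace model (a deliberately simplified illustration, not the
> >>> actual ice Rx path):
> >>>
> >>>   #include <stdio.h>
> >>>   #include <stdlib.h>
> >>>
> >>>   #define RING_SIZE 4
> >>>
> >>>   int main(void)
> >>>   {
> >>>           void *page[RING_SIZE] = { NULL };
> >>>           int allocs = 0, leaked = 0;
> >>>
> >>>           for (int iter = 0; iter < 1024; iter++) {
> >>>                   int slot = iter % RING_SIZE;
> >>>                   /* every 7th fragment is a 0-byte descriptor */
> >>>                   int size = (iter % 7) ? 2048 : 0;
> >>>
> >>>                   /* refill: stands in for ice_alloc_mapped_pages() */
> >>>                   if (!page[slot]) {
> >>>                           page[slot] = malloc(4096);
> >>>                           allocs++;
> >>>                   }
> >>>
> >>>                   if (size) {
> >>>                           /* normal buffer: page recycled in place */
> >>>                           continue;
> >>>                   }
> >>>                   /* zero-size buffer: skipped by the put/reuse
> >>>                    * logic, its pointer is simply dropped, and the
> >>>                    * slot gets a fresh page on the next pass, so
> >>>                    * the old page is leaked */
> >>>                   page[slot] = NULL;
> >>>                   leaked++;
> >>>           }
> >>>           printf("allocations: %d, leaked pages: %d\n",
> >>>                  allocs, leaked);
> >>>           return 0;
> >>>   }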
> >>>
> >>> I will have a fix with a more detailed description posted tomorrow.
> >>
> >> @Jaroslav, I've posted a fix which I believe should resolve your issue:
> >>
> >> https://lore.kernel.org/intel-wired-lan/20250709-jk-ice-fix-rx-mem-leak-v1-1-cfdd7eeea905@intel.com/T/#u
> >>
> >> I am reasonably confident it should resolve the issue you reported. If
> >> possible, it would be appreciated if you could test it and report back
> >> to confirm.
> >
> > @Jacob that’s excellent news!
> >
> > I’ve built and installed 6.15.5 with your patch on one of our servers
> > (strangely, I had to disable CONFIG_MEM_ALLOC_PROFILING with this
> > patch or the kernel wouldn’t boot) and started a VM running our
> > production traffic. I’ll let it run for a day or two, observe the
> > memory utilization per NUMA node, and report back.
>
> Great! A bit odd you had to disable CONFIG_MEM_ALLOC_PROFILING. I didn't
> have trouble on my kernel with it enabled.

Status update after ~45h of uptime: so far so good, I do not see the
continuous memory-consumption increase on the home NUMA nodes that we saw
before. See the attached "status_before_after_45h_uptime.png" for a
comparison.
[Attachment: "status_before_after_45h_uptime.png" (image/png, 355801 bytes)]