Message-ID: <CAC5umyhrcb963mA9dUaZswJKq2MF_OVaipNmy=L4J7u6rPjfqA@mail.gmail.com>
Date: Mon, 26 Jan 2026 10:57:11 +0900
From: Akinobu Mita <akinobu.mita@...il.com>
To: Gregory Price <gourry@...rry.net>
Cc: Michal Hocko <mhocko@...e.com>, linux-cxl@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, akpm@...ux-foundation.org,
axelrasmussen@...gle.com, yuanchu@...gle.com, weixugc@...gle.com,
hannes@...xchg.org, david@...nel.org, zhengqi.arch@...edance.com,
shakeel.butt@...ux.dev, lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com,
vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com, ziy@...dia.com,
matthew.brost@...el.com, joshua.hahnjy@...il.com, rakie.kim@...com,
byungchul@...com, ying.huang@...ux.alibaba.com, apopple@...dia.com,
bingjiao@...gle.com, jonathan.cameron@...wei.com,
pratyush.brahma@....qualcomm.com
Subject: Re: [PATCH v4 3/3] mm/vmscan: don't demote if there is not enough
free memory in the lower memory tier
On Fri, Jan 23, 2026 at 1:39 AM Gregory Price <gourry@...rry.net> wrote:
>
> On Thu, Jan 22, 2026 at 09:32:51AM +0900, Akinobu Mita wrote:
> > Almost all of the execution time is consumed by folio_alloc_swap(),
> > and analysis using Flame Graph reveals that spinlock contention is
> > occurring in the call path __mem_cgroup_try_charge_swap ->
> > __memcg_memory_event -> cgroup_file_notify.
> >
> > In this reproduction procedure, no swap is configured, and calls to
> > folio_alloc_swap() always fail. To avoid spinlock contention, I tried
> > modifying the source code to return -ENOMEM without calling
> > folio_alloc_swap(), but this caused other lock contention
> > (lruvec->lru_lock in evict_folios()) in several other places, so it
> > did not work around the problem.
>
> Doesn't this suggest what I mentioned earlier? If you don't demote when
> the target node is full, then you're removing a memory pressure signal
> from the lower node and reclaim won't ever clean up the lower node to
> make room for future demotions.
Thank you for your analysis.
Now I finally understand the concern (though I'll need to learn more
before I can find a solution...).
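For reference, the workaround I mentioned above was conceptually just an
early exit before the swap allocation path takes any locks. A rough,
untested sketch of the idea (swap_alloc_can_succeed() is a made-up helper
name, and the real change was in the caller of folio_alloc_swap()):

#include <linux/swap.h>

/*
 * Hypothetical helper, only to illustrate the experiment: skip the
 * whole swap allocation path (and the memcg charge/notify locking
 * underneath it) when swap allocation cannot possibly succeed.
 */
static bool swap_alloc_can_succeed(void)
{
	/* No swap device configured at all. */
	if (!total_swap_pages)
		return false;

	/* Swap is configured but already completely used up. */
	if (get_nr_swap_pages() <= 0)
		return false;

	return true;
}

Even with that early exit, the pressure only moved elsewhere (the
lruvec->lru_lock contention in evict_folios() mentioned above), so it
did not really help.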
> I might be missing something here, though, is your system completely out
> of memory at this point?
>
> Presumably you're hitting direct reclaim and not just waking up kswapd
> because things are locking up.
>
> If there's no swap and nowhere to demote, then this all sounds like
> normal OOM behavior.
>
> Does this whole thing go away if you configure some swap space?
I tried it, and the same issue occurred when I followed the same repro
steps but ran a stress-ng-memrate workload that exceeded the combined
memory and swap capacity.
To be more precise, the machine was unresponsive for over an hour; I
could not terminate the workload manually, so the only option was to
power-cycle the machine.
It is rather inconvenient that, if a similar workload were triggered by
a mistake or a runaway program, it would not result in an OOM kill and
the machine would be unusable for hours.
> > When demotion_enabled is true and a memory allocation finds no free
> > memory on the requested node, demotion may be able to move anonymous
> > pages to a lower node and free up memory even without a swap device,
> > so more anonymous pages become candidates for eviction.
> > However, if free memory on the target node for demotion runs out,
> > various processes will perform similar operations in search of free
> > memory, wasting time on lock contention.
> >
> > Reducing lock contention or changing the eviction process is also an
> > interesting solution, but at present I have not come up with any workaround
> > other than disabling demotion when free memory on lower-level nodes is
> > exhausted.
>
> The lock contention seems like a symptom, not the cause. The cause
> appears to be that you're out of memory with no swap configured.
I understand that the current patch has issues, but I would like to
find a better solution that resolves the problem above even when swap
is configured.
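For reference, the check in the current patch is conceptually along the
lines of the sketch below (simplified and hypothetical, not the actual
hunk; demotion_target_has_space() is a made-up name and the watermark
comparison is paraphrased):

#include <linux/memory-tiers.h>
#include <linux/mmzone.h>
#include <linux/vmstat.h>

/*
 * Hypothetical helper: before demoting from @node, check whether the
 * demotion target still has free memory above its minimum watermarks,
 * and skip demotion otherwise.
 */
static bool demotion_target_has_space(int node)
{
	int target = next_demotion_node(node);
	unsigned long free, min = 0;
	int zid;

	if (target == NUMA_NO_NODE)
		return false;

	free = sum_zone_node_page_state(target, NR_FREE_PAGES);

	for (zid = 0; zid < MAX_NR_ZONES; zid++)
		min += min_wmark_pages(&NODE_DATA(target)->node_zones[zid]);

	/* Target is already below its min watermarks: don't demote. */
	return free > min;
}

As you pointed out, gating demotion like this also removes the memory
pressure signal that would otherwise drive reclaim on the lower node,
so I will look for an approach that keeps that signal intact.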