Message-ID: <aXobeWtnJVrTmxlV@gourry-fedora-PF4VCD3F>
Date: Wed, 28 Jan 2026 09:21:45 -0500
From: Gregory Price <gourry@...rry.net>
To: Michal Hocko <mhocko@...e.com>
Cc: Akinobu Mita <akinobu.mita@...il.com>, linux-cxl@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, axelrasmussen@...gle.com,
yuanchu@...gle.com, weixugc@...gle.com, hannes@...xchg.org,
david@...nel.org, zhengqi.arch@...edance.com,
shakeel.butt@...ux.dev, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org,
surenb@...gle.com, bingjiao@...gle.com
Subject: Re: [PATCH v3 3/3] mm/vmscan: don't demote if there is not enough
free memory in the lower memory tier
On Wed, Jan 28, 2026 at 10:56:44AM +0100, Michal Hocko wrote:
> > .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> > __GFP_NOMEMALLOC | GFP_NOWAIT,
> > };
>
> This will trigger kswapd so there will be background reclaim demoting
> from those lower tiers.
>
Given the node is full, kswapd will already be running, but the line
above masks off __GFP_RECLAIM, so it's not supposed to trigger either
reclaim path.
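(As an aside for anyone reading along - this is purely an illustrative
sketch and not part of the patch under discussion: __GFP_RECLAIM is the
union of __GFP_DIRECT_RECLAIM and __GFP_KSWAPD_RECLAIM, and those two
bits gate the two reclaim paths being talked about here. A trivial
helper to show the split, using only the real flags from
include/linux/gfp_types.h:

#include <linux/gfp.h>
#include <linux/printk.h>

static void report_reclaim_paths(const char *what, gfp_t gfp)
{
	/* __GFP_DIRECT_RECLAIM: the caller may enter synchronous reclaim */
	pr_info("%s: direct reclaim %s\n", what,
		(gfp & __GFP_DIRECT_RECLAIM) ? "allowed" : "blocked");

	/* __GFP_KSWAPD_RECLAIM: the allocation may wake kswapd */
	pr_info("%s: kswapd wakeup %s\n", what,
		(gfp & __GFP_KSWAPD_RECLAIM) ? "allowed" : "blocked");
}
)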
> > Any chance you are using hugetlb on this system? This looks like a
> > clear bug, but it may not be what you're experiencing.
>
> Hugetlb pages are not sitting on LRU lists so they are not participating
> in the demotion.
>
I noted in the v4 thread (and responded there as well) that this was
the case:
https://lore.kernel.org/linux-mm/aXksUiwYGwad5JvC@gourry-fedora-PF4VCD3F/

But since then we've found another path through this code that adds
reclaim back in as well - and I wouldn't be surprised to find more.

The bigger issue is that this fix can cause inversions under transient
pressure - and in fact the current code will cause inversions instead
of waiting for reclaim to clear out the lower nodes.

The reality is that this code probably needs a proper look and some
untangling. This has been on my back-burner for a while - I've wanted
to sink the actual demotion code into memory-tiers.c and provide
something like:
... mt_demote_folios(src_nid, folio_list)
{
	/* apply some demotion policy here */
}
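To make that a bit more concrete, here is a hedged sketch of one kind
of policy check such a helper could make before demoting. It is purely
illustrative, not a proposal: it only assumes next_demotion_node() from
mm/memory-tiers.c and the NR_FREE_PAGES counter, and the threshold is
made up for the example, not current vmscan behaviour.

#include <linux/memory-tiers.h>
#include <linux/mmzone.h>
#include <linux/numa.h>
#include <linux/vmstat.h>

/*
 * Illustrative only: skip demotion when the next-lower tier obviously
 * has no room for the folios we want to move.
 */
static bool demotion_target_has_room(int src_nid, unsigned long nr_folios)
{
	int target_nid = next_demotion_node(src_nid);

	if (target_nid == NUMA_NO_NODE)
		return false;

	/* hypothetical threshold: more free pages than folios to demote */
	return sum_zone_node_page_state(target_nid, NR_FREE_PAGES) > nr_folios;
}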
~Gregory