Message-ID: <aXlKOxGGI9zne8sl@google.com>
Date: Tue, 27 Jan 2026 23:28:59 +0000
From: Bing Jiao <bingjiao@...gle.com>
To: Gregory Price <gourry@...rry.net>
Cc: Akinobu Mita <akinobu.mita@...il.com>, linux-cxl@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, axelrasmussen@...gle.com,
yuanchu@...gle.com, weixugc@...gle.com, hannes@...xchg.org,
david@...nel.org, mhocko@...nel.org, zhengqi.arch@...edance.com,
shakeel.butt@...ux.dev, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org,
surenb@...gle.com
Subject: Re: [PATCH v3 3/3] mm/vmscan: don't demote if there is not enough
free memory in the lower memory tier
On Tue, Jan 27, 2026 at 03:24:36PM -0500, Gregory Price wrote:
> On Sat, Jan 10, 2026 at 10:55:02PM +0900, Akinobu Mita wrote:
> > Since can_reclaim_anon_pages() checks whether there is free space on the swap
> > device before checking with can_demote(), I think the negative impact of this
> > change will be small. However, since I have not been able to confirm the
> > behavior when a swap device is available, I would like to correctly understand
> > the impact.
>
> Something else is going on here
>
> See demote_folio_list and alloc_demote_folio
>
> static unsigned int demote_folio_list(struct list_head *demote_folios,
> struct pglist_data *pgdat,
> struct mem_cgroup *memcg)
> {
> struct migration_target_control mtc = {
> .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> __GFP_NOMEMALLOC | GFP_NOWAIT,
> };
> }
>
> static struct folio *alloc_demote_folio(struct folio *src,
> unsigned long private)
> {
> /* Only attempt to demote to the preferred node */
> mtc->nmask = NULL;
> mtc->gfp_mask |= __GFP_THISNODE;
> dst = alloc_migration_target(src, (unsigned long)mtc);
> if (dst)
> return dst;
>
> /* Now attempt to demote to any node in the lower tier */
> mtc->gfp_mask &= ~__GFP_THISNODE;
> mtc->nmask = allowed_mask;
> return alloc_migration_target(src, (unsigned long)mtc);
> }
>
>
> /*
> * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
> */
>
>
> You basically shouldn't be hitting any reclaim behavior at all, and if
> the target nodes are actually under various watermarks, you should be
> getting allocation failures and quick-outs from the demotion logic.
Hi, Gregory, hope you are doing well.
I observed that when a large folio is allocated,
alloc_migration_target() clears __GFP_RECLAIM but then ORs in
GFP_TRANSHUGE. Since GFP_TRANSHUGE includes __GFP_DIRECT_RECLAIM,
I am wondering whether this reintroduces direct reclaim that should
be avoided during demotion.
struct folio *alloc_migration_target(struct folio *src, unsigned long private)
...
if (folio_test_large(src)) {
/*
* clear __GFP_RECLAIM to make the migration callback
* consistent with regular THP allocations.
*/
gfp_mask &= ~__GFP_RECLAIM;
gfp_mask |= GFP_TRANSHUGE;
order = folio_order(src);
}
#define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
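For clarity, here is a rough userspace sketch of the net effect on the
gfp mask for a large folio. This is not kernel code; the flag values
below are placeholders, and only the set/clear logic mirrors the
excerpts above:

/*
 * Illustrative sketch only: placeholder flag values, the bit
 * manipulation mirrors demote_folio_list() and
 * alloc_migration_target() as quoted above.
 */
#include <stdio.h>

#define __GFP_KSWAPD_RECLAIM	0x1u
#define __GFP_DIRECT_RECLAIM	0x2u
#define __GFP_RECLAIM		(__GFP_KSWAPD_RECLAIM | __GFP_DIRECT_RECLAIM)
#define GFP_TRANSHUGE_LIGHT	0x4u
#define GFP_TRANSHUGE		(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)

int main(void)
{
	/*
	 * demote_folio_list() builds the mask with __GFP_RECLAIM masked
	 * off and GFP_NOWAIT only re-adds kswapd reclaim, so assume no
	 * direct reclaim bit is set at this point.
	 */
	unsigned int gfp_mask = __GFP_KSWAPD_RECLAIM;

	/* alloc_migration_target() for a large folio: */
	gfp_mask &= ~__GFP_RECLAIM;
	gfp_mask |= GFP_TRANSHUGE;

	/* prints "direct reclaim allowed: yes" */
	printf("direct reclaim allowed: %s\n",
	       (gfp_mask & __GFP_DIRECT_RECLAIM) ? "yes" : "no");
	return 0;
}

If that reading is right, the __GFP_RECLAIM masking done for the
demotion path is effectively undone for large folios when
GFP_TRANSHUGE is ORed back in.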
Best,
Bing