Message-ID: <aWcrVRT_RjxdjoN7@hyeyoo>
Date: Wed, 14 Jan 2026 14:36:21 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Chris Mason <clm@...a.com>
Cc: Qi Zheng <qi.zheng@...ux.dev>, hannes@...xchg.org, hughd@...gle.com,
        mhocko@...e.com, roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
        muchun.song@...ux.dev, david@...hat.com, lorenzo.stoakes@...cle.com,
        ziy@...dia.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
        npache@...hat.com, ryan.roberts@....com, dev.jain@....com,
        baohua@...nel.org, lance.yang@...ux.dev, akpm@...ux-foundation.org,
        richard.weiyang@...il.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v6 4/4] mm: thp: reparent the split queue during memcg
 offline

On Tue, Jan 13, 2026 at 02:00:43PM -0800, Chris Mason wrote:
> On Mon, 10 Nov 2025 16:17:58 +0800 Qi Zheng <qi.zheng@...ux.dev> wrote:
> 
> > From: Qi Zheng <zhengqi.arch@...edance.com>
> > 
> > Similar to list_lru, the split queue is relatively independent and does
> > not need to be reparented along with objcg and LRU folios (holding
> > objcg lock and lru lock). So let's apply a similar mechanism to
> > list_lru and reparent the split queue separately when the memcg goes
> > offline.
> > 
> 
> Hi everyone,
> 
> I'm running all of the MM commits since v6.18 through the AI patch review
> tools, and this one was flagged.  I'm much less confident in this report;
> AI tends to get these races wrong:
> 
> [AI review output]
> 
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index b0c6a4635c67b..cc6db20d7dca0 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1775,6 +1775,12 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
> >  bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
> >
> >  void mem_cgroup_show_protected_memory(struct mem_cgroup *memcg);
> > +
> > +static inline bool memcg_is_dying(struct mem_cgroup *memcg)
> > +{
> > +	return memcg ? css_is_dying(&memcg->css) : false;
> > +}
> 
> [ ... ]
> 
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 13684e5376e82..d17d3810a882c 100644
> 
> [ ... ]
> 
> > @@ -1129,8 +1140,14 @@ split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg, unsigned long *flags
> >  {
> >  	struct deferred_split *queue;
> >
> > +retry:
> >  	queue = memcg_split_queue(nid, memcg);
> >  	spin_lock_irqsave(&queue->split_queue_lock, *flags);
> > +	if (unlikely(memcg_is_dying(memcg))) {
> > +		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
> > +		memcg = parent_mem_cgroup(memcg);
> > +		goto retry;
> > +	}
> >
> >  	return queue;
> >  }
> 
> There appears to be a race window that can cause a split_queue_len counter
> mismatch. Between the time CSS_DYING is set (in kill_css()) and when
> reparent_deferred_split_queue() actually runs (asynchronously in
> css_offline), folios can be unqueued via __folio_unqueue_deferred_split().
> 
> In this scenario:
>   __folio_unqueue_deferred_split()
>     -> folio_split_queue_lock_irqsave()
>        -> split_queue_lock_irqsave() sees memcg_is_dying() == true
>        -> returns parent's queue (locked)
>     -> ds_queue->split_queue_len-- decrements parent's counter
>     -> list_del_init() removes folio from child's list
> 
> The folio is correctly removed from the child's list (list_del_init is
> self-referential), but the wrong queue's counter is decremented.

Good point. That sounds quite possible to me.

I don't think there's anything that prevents a folio from being
unqueued before it's reparented.
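
To make the window concrete, here's a minimal sketch of the unqueue path
as I read it with the quoted diff applied (helper names follow the diff
and the review above; the body is trimmed, so treat it as an illustration
rather than the exact tree code):

	static void __folio_unqueue_deferred_split(struct folio *folio)
	{
		struct deferred_split *ds_queue;
		unsigned long flags;

		/*
		 * With the retry loop in split_queue_lock_irqsave(), this
		 * can return the *parent's* queue (locked) once CSS_DYING
		 * is set on the child, even though the folio still sits on
		 * the child's split_queue list.
		 */
		ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
		if (!list_empty(&folio->_deferred_list)) {
			/* decrements the parent's counter in the window... */
			ds_queue->split_queue_len--;
			/* ...while deleting the folio from the child's list */
			list_del_init(&folio->_deferred_list);
		}
		split_queue_unlock_irqrestore(ds_queue, flags);
	}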

> If the parent queue was empty (split_queue_len == 0), this would cause an
> unsigned long underflow to ULONG_MAX.

That said, the accounting mismatch will only persist until
reparent_deferred_split_queue() reparents the deferred split queue.
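
(Once the queue is reparented, the stray decrement cancels out: the
child's split_queue_len still counts the folio deleted above, so folding
it into the parent's counter makes the numbers match the lists again.
Rough sketch only, not the patch's exact code:

	static void reparent_deferred_split_queue(struct mem_cgroup *memcg)
	{
		struct deferred_split *child_q = &memcg->deferred_split_queue;
		struct deferred_split *parent_q =
			&parent_mem_cgroup(memcg)->deferred_split_queue;

		/* lock ordering details omitted */
		spin_lock_irq(&child_q->split_queue_lock);
		spin_lock_nested(&parent_q->split_queue_lock,
				 SINGLE_DEPTH_NESTING);

		list_splice_tail_init(&child_q->split_queue,
				      &parent_q->split_queue);
		/*
		 * If the parent's counter underflowed by one in the racy
		 * window, the child's counter is one higher than its list
		 * length, so the sum is correct again.
		 */
		parent_q->split_queue_len += child_q->split_queue_len;
		child_q->split_queue_len = 0;

		spin_unlock(&parent_q->split_queue_lock);
		spin_unlock_irq(&child_q->split_queue_lock);
	})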

Ideally this should be fixed by checking whether the folio has been
reparented after acquiring the split queue lock, but since we don't
reparent LRU folios yet ... do we need a band-aid until then?
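
FWIW, the reparented-check I have in mind would look something like this
(a sketch only, under the assumption that folio_memcg() can be re-checked
under the queue lock, mirroring the folio_lruvec_lock() retry pattern):

	static struct deferred_split *
	folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
	{
		struct mem_cgroup *memcg;
		struct deferred_split *queue;

	retry:
		memcg = folio_memcg(folio);
		queue = memcg_split_queue(folio_nid(folio), memcg);
		spin_lock_irqsave(&queue->split_queue_lock, *flags);
		/*
		 * Re-check under the lock: if the folio was reparented
		 * after we sampled its memcg, we locked the old queue.
		 */
		if (unlikely(folio_memcg(folio) != memcg)) {
			spin_unlock_irqrestore(&queue->split_queue_lock,
					       *flags);
			goto retry;
		}
		return queue;
	}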

Do we want to have an is_dying property in the split queue, as in v2? [1]
[1] https://lore.kernel.org/linux-mm/55370bda7b2df617033ac12116c1712144bb7591.1758618527.git.zhengqi.arch@bytedance.com
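
For reference, that would close the window because callers get redirected
to the parent only after the child's queue has actually been drained under
its own lock. Roughly (from memory of [1], not verbatim):

	struct deferred_split {
		spinlock_t split_queue_lock;
		struct list_head split_queue;
		unsigned long split_queue_len;
		bool is_dying;	/* set under split_queue_lock */
	};

	static struct deferred_split *
	split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg,
				 unsigned long *flags)
	{
		struct deferred_split *queue;

	retry:
		queue = memcg_split_queue(nid, memcg);
		spin_lock_irqsave(&queue->split_queue_lock, *flags);
		if (unlikely(queue->is_dying)) {
			/*
			 * Reparenting sets is_dying and empties the queue
			 * under this lock, so nothing is left to unqueue
			 * here; walking up to the parent is safe.
			 */
			spin_unlock_irqrestore(&queue->split_queue_lock,
					       *flags);
			memcg = parent_mem_cgroup(memcg);
			goto retry;
		}
		return queue;
	}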

> Could this be addressed by checking that the folio is actually in the
> queue being unlocked, perhaps by verifying the list head matches before
> decrementing split_queue_len?

> [ ... ]
> 
> > @@ -3920,6 +3920,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> >  	zswap_memcg_offline_cleanup(memcg);
> >
> >  	memcg_offline_kmem(memcg);
> > +	reparent_deferred_split_queue(memcg);
> 
> The race window exists because CSS_DYING is set in kill_css() before
> css_offline() callbacks run asynchronously. The comment in split_queue_lock
> mentions that THPs will be "hidden from the shrinker side" during this
> period, but does not address the counter accounting issue when folios
> are unqueued.

-- 
Cheers,
Harry / Hyeonggon
