Message-ID: <20260113220046.2274684-1-clm@meta.com>
Date: Tue, 13 Jan 2026 14:00:43 -0800
From: Chris Mason <clm@...a.com>
To: Qi Zheng <qi.zheng@...ux.dev>
CC: Chris Mason <clm@...a.com>, <hannes@...xchg.org>, <hughd@...gle.com>,
<mhocko@...e.com>, <roman.gushchin@...ux.dev>,
<shakeel.butt@...ux.dev>, <muchun.song@...ux.dev>, <david@...hat.com>,
<lorenzo.stoakes@...cle.com>, <ziy@...dia.com>, <harry.yoo@...cle.com>,
<baolin.wang@...ux.alibaba.com>, <Liam.Howlett@...cle.com>,
<npache@...hat.com>, <ryan.roberts@....com>, <dev.jain@....com>,
<baohua@...nel.org>, <lance.yang@...ux.dev>,
<akpm@...ux-foundation.org>, <richard.weiyang@...il.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<cgroups@...r.kernel.org>, Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v6 4/4] mm: thp: reparent the split queue during memcg offline
On Mon, 10 Nov 2025 16:17:58 +0800 Qi Zheng <qi.zheng@...ux.dev> wrote:
> From: Qi Zheng <zhengqi.arch@...edance.com>
>
> Similar to list_lru, the split queue is relatively independent and does
> not need to be reparented along with objcg and LRU folios (holding
> objcg lock and lru lock). So let's apply a mechanism similar to list_lru's
> to reparent the split queue separately when the memcg is offline.
>
Hi everyone,
I'm running all of the MM commits since v6.18 through the AI patch review
tools, and this one was flagged. I'm much less confident in this report;
AI tends to get these races wrong:
[AI review output]
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index b0c6a4635c67b..cc6db20d7dca0 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1775,6 +1775,12 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
> bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
>
> void mem_cgroup_show_protected_memory(struct mem_cgroup *memcg);
> +
> +static inline bool memcg_is_dying(struct mem_cgroup *memcg)
> +{
> + return memcg ? css_is_dying(&memcg->css) : false;
> +}
[ ... ]
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 13684e5376e82..d17d3810a882c 100644
[ ... ]
> @@ -1129,8 +1140,14 @@ split_queue_lock_irqsave(int nid, struct mem_cgroup *memcg, unsigned long *flags
> {
> struct deferred_split *queue;
>
> +retry:
> queue = memcg_split_queue(nid, memcg);
> spin_lock_irqsave(&queue->split_queue_lock, *flags);
> + if (unlikely(memcg_is_dying(memcg))) {
> + spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
> + memcg = parent_mem_cgroup(memcg);
> + goto retry;
> + }
>
> return queue;
> }
There appears to be a race window that can cause a split_queue_len counter
mismatch. Between the time CSS_DYING is set (in kill_css()) and when
reparent_deferred_split_queue() actually runs (asynchronously in
css_offline), folios can be unqueued via __folio_unqueue_deferred_split().
In this scenario:
__folio_unqueue_deferred_split()
  -> folio_split_queue_lock_irqsave()
     -> split_queue_lock_irqsave() sees memcg_is_dying() == true
     -> returns parent's queue (locked)
  -> ds_queue->split_queue_len-- decrements parent's counter
  -> list_del_init() removes folio from child's list
The folio is still removed from the child's list (list_del_init() unlinks
the entry from whatever list it is on), but the wrong queue's counter is
decremented. If the
parent queue was empty (split_queue_len == 0), this would cause an
unsigned long underflow to ULONG_MAX.
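To make the accounting mismatch concrete, here is a minimal sketch of the
unqueue path as I read it on top of this series. It is simplified (the
partially-mapped stat handling is elided), and the folio_split_queue_lock_irqsave()
signature is assumed from the helper names in the hunks above, not copied
from the patch:

/*
 * Minimal sketch of __folio_unqueue_deferred_split() on top of this
 * series; stats handling elided, helper names taken from the hunks above.
 */
bool __folio_unqueue_deferred_split(struct folio *folio)
{
        struct deferred_split *ds_queue;
        unsigned long flags;
        bool unqueued = false;

        /*
         * With the retry in split_queue_lock_irqsave(), a dying child
         * memcg makes this return the parent's queue, already locked.
         */
        ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
        if (!list_empty(&folio->_deferred_list)) {
                /* decrements the parent's counter ... */
                ds_queue->split_queue_len--;
                /* ... while unlinking the folio from the child's list */
                list_del_init(&folio->_deferred_list);
                unqueued = true;
        }
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

        return unqueued;
}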
Could this be addressed by checking that the folio is actually on the
queue that was just locked, perhaps by verifying the list head matches before
decrementing split_queue_len?
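For illustration only, a simpler variant that would slot into the locked
section of the sketch above. It only guards the counter; it does not
actually verify which queue the folio sits on, so the child/parent
mismatch remains, but the unsigned wrap cannot happen:

        /*
         * Hypothetical guard: refuse to decrement a counter that is
         * already zero, so a folio still sitting on a dying child's
         * list cannot underflow the parent's split_queue_len.
         */
        if (!list_empty(&folio->_deferred_list)) {
                if (!WARN_ON_ONCE(!ds_queue->split_queue_len))
                        ds_queue->split_queue_len--;
                list_del_init(&folio->_deferred_list);
                unqueued = true;
        }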
[ ... ]
> @@ -3920,6 +3920,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> zswap_memcg_offline_cleanup(memcg);
>
> memcg_offline_kmem(memcg);
> + reparent_deferred_split_queue(memcg);
The race window exists because CSS_DYING is set in kill_css() before
css_offline() callbacks run asynchronously. The comment in split_queue_lock
mentions that THPs will be "hidden from the shrinker side" during this
period, but does not address the counter accounting issue when folios
are unqueued.
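For reference, the reason the counter value matters even while the folios
themselves are hidden is that the shrinker's count side reads
split_queue_len directly. Roughly, from my reading of mm/huge_memory.c
(modulo whatever this series changes here), so a transient ULONG_MAX on
the parent is reported straight to the shrinker:

static unsigned long deferred_split_count(struct shrinker *shrink,
                                          struct shrink_control *sc)
{
        struct pglist_data *pgdata = NODE_DATA(sc->nid);
        struct deferred_split *ds_queue = &pgdata->deferred_split_queue;

#ifdef CONFIG_MEMCG
        if (sc->memcg)
                ds_queue = &sc->memcg->deferred_split_queue;
#endif
        /* the raw counter is the shrinker's object count */
        return READ_ONCE(ds_queue->split_queue_len);
}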