Message-Id: <20260104102745.cfd4f6bd661e8e817afcdba8@linux-foundation.org>
Date: Sun, 4 Jan 2026 10:27:45 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Bing Jiao <bingjiao@...gle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, gourry@...rry.net,
longman@...hat.com, hannes@...xchg.org, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev, muchun.song@...ux.dev,
tj@...nel.org, mkoutny@...e.com, david@...nel.org,
zhengqi.arch@...edance.com, lorenzo.stoakes@...cle.com,
axelrasmussen@...gle.com, chenridong@...weicloud.com, yuanchu@...gle.com,
weixugc@...gle.com, cgroups@...r.kernel.org, Akinobu Mita
<akinobu.mita@...il.com>
Subject: Re: [PATCH v4] mm/vmscan: fix demotion targets checks in
reclaim/demotion

On Sun, 4 Jan 2026 08:54:05 +0000 Bing Jiao <bingjiao@...gle.com> wrote:

> Fix two bugs in demote_folio_list() and can_demote() due to incorrect
> demotion target checks in reclaim/demotion.

Thanks.

> Commit 7d709f49babc ("vmscan,cgroup: apply mems_effective to reclaim")
> introduces the cpuset.mems_effective check and applies it to
> can_demote(). However:
>
> 1. It does not apply this check in demote_folio_list(), which leads
> to situations where pages are demoted to nodes that are
> explicitly excluded from the task's cpuset.mems.
>
> 2. In can_demote(), it checks only the nodes in the immediate next
> tier of the demotion hierarchy rather than all allowed demotion
> targets. This can cause pages to never be demoted if the next-tier
> nodes are not set in mems_effective (see the sketch below).
>
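> As a rough, self-contained illustration (toy code with plain bitmasks
> and a made-up 6-node topology mirroring the Bug 2 reproduction below;
> the helpers here are invented for the example and are not the actual
> mm/vmscan.c logic), the pre-fix behaviour amounts to:
>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define NODE(n) (1UL << (n))
>
> /* Toy topology: nodes 0-2 top tier, node 3 mid tier, 4-5 bottom tier. */
> static unsigned long next_tier(int nid)
> {
>         if (nid <= 2)
>                 return NODE(3);
>         if (nid == 3)
>                 return NODE(4) | NODE(5);
>         return 0;       /* bottom tier: nowhere left to demote to */
> }
>
> /* Bug 2: the old check consulted only the immediate next tier. */
> static bool old_can_demote(int nid, unsigned long mems_effective)
> {
>         return next_tier(nid) & mems_effective;
> }
>
> int main(void)
> {
>         /* cpuset.mems = "0-2,4-5": node 3 is deliberately excluded. */
>         unsigned long mems_effective =
>                 NODE(0) | NODE(1) | NODE(2) | NODE(4) | NODE(5);
>
>         /* Prints 0: demotion is refused even though nodes 4-5 are
>          * allowed targets, so reclaim falls through to OOM.  Bug 1 is
>          * the converse: demote_folio_list() applied no such mask at
>          * all, so pages could land on excluded nodes. */
>         printf("%d\n", old_can_demote(0, mems_effective));
>         return 0;
> }
>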
> These bugs break the resource isolation provided by cpuset.mems. They
> are visible from userspace on multi-tier memory systems because pages
> can either fail to be demoted entirely or be demoted to nodes that are
> not allowed.
>
> To address these bugs, update cpuset_node_allowed() and
> mem_cgroup_node_allowed() to return effective_mems, so that it can be
> ANDed directly against the demotion targets, and update can_demote()
> and demote_folio_list() accordingly (see the sketch below).
>
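> A minimal sketch of what the fix amounts to, reusing the toy
> next_tier() helper from the sketch above (again invented for the
> example, not the real kernel interfaces): collect every reachable
> demotion target and AND it against the effective_mems nodemask now
> returned by cpuset_node_allowed()/mem_cgroup_node_allowed():
>
> static unsigned long all_demotion_targets(int nid)
> {
>         unsigned long targets = 0, tier = next_tier(nid);
>
>         while (tier) {
>                 targets |= tier;
>                 /* follow the chain from the lowest node in this tier */
>                 tier = next_tier(__builtin_ctzl(tier));
>         }
>         return targets;
> }
>
> static bool new_can_demote(int nid, unsigned long mems_effective)
> {
>         /* demote_folio_list() applies the same AND when it builds the
>          * mask of nodes the migration code may allocate from. */
>         return all_demotion_targets(nid) & mems_effective;
> }
>
> With the same "0-2,4-5" mask, new_can_demote(0, ...) is now true
> because nodes 4-5 survive the AND, and the excluded node 3 can no
> longer be picked as a demotion target.
>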
> Bug 1 reproduction:
> Assume a system with 4 nodes, where nodes 0-1 are top-tier and
> nodes 2-3 are far-tier memory. All nodes have equal capacity.
>
> Test script:
> echo 1 > /sys/kernel/mm/numa/demotion_enabled
> mkdir /sys/fs/cgroup/test
> echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
> echo "0-2" > /sys/fs/cgroup/test/cpuset.mems
> echo $$ > /sys/fs/cgroup/test/cgroup.procs
> swapoff -a
> # Expectation: Should respect node 0-2 limit.
> # Observation: Node 3 shows significant allocation (MemFree drops)
> stress-ng --oomable --vm 1 --vm-bytes 150% --mbind 0,1
>
> Bug 2 reproduction:
> Assume a system with 6 nodes, where nodes 0-2 are top-tier,
> node 3 is a far-tier node, and nodes 4-5 are the farthest-tier nodes.
> All nodes have equal capacity.
>
> Test script:
> echo 1 > /sys/kernel/mm/numa/demotion_enabled
> mkdir /sys/fs/cgroup/test
> echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
> echo "0-2,4-5" > /sys/fs/cgroup/test/cpuset.mems
> echo $$ > /sys/fs/cgroup/test/cgroup.procs
> swapoff -a
> # Expectation: Pages are demoted to Nodes 4-5
> # Observation: No pages are demoted before oom.
> stress-ng --oomable --vm 1 --vm-bytes 150% --mbind 0,1,2
>
> Fixes: 7d709f49babc ("vmscan,cgroup: apply mems_effective to reclaim")
> Cc: <stable@...r.kernel.org>

We'll want to fix these things in 6.16.X and later, but you've prepared
this patch against "mm/vmscan: don't demote if there is not enough free
memory in the lower memory tier", which is presently under test/review
in mm.git's mm-unstable branch.

This seems to be incorrect ordering - this fix should go ahead of
Akinobu Mita's series "mm: fix oom-killer not being invoked when
demotion is enabled v2".

So can you please redo this patch against current mainline? And please
also review the "mm: fix oom-killer not being invoked when demotion is
enabled" series to ensure that things will work together nicely when
that time comes.