Message-ID: <CAGWkznHr46jSDNPMxk684QYcRTJnNk5BOhuMQoRvCxkyEBKdZQ@mail.gmail.com>
Date: Thu, 22 Jan 2026 10:43:01 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>, Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>, Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, steve.kang@...soc.com
Subject: Re: [PATCH] mm: bail out when meet the goal of proactive memcg reclaim
On Wed, Jan 21, 2026 at 11:54 PM Joshua Hahn <joshua.hahnjy@...il.com> wrote:
>
> On Wed, 21 Jan 2026 17:06:20 +0800 "zhaoyang.huang" <zhaoyang.huang@...soc.com> wrote:
>
> > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
>
> Hi Zhaoyang, I hope that you are doing well!
>
> > The proactive memcg reclaim defines a specific target mem cgroup
> > as well as a certain amount of memory, which differs from kswapd
> > and direct reclaim, which need to keep fairness among cgroups.
> > This commit introduces a criterion to have proactive reclaim bail
> > out when the target mem cgroup can meet the goal via its own
> > lruvec; the reclaim still goes through the whole tree if the
> > iteration starts on the descendants.
>
> I think the motivation makes sense. If a user-initiated reclaim has already
> reclaimed the amount of memory that was requested, there's probably no
> need to continue traversing the tree and reclaiming more, and we can do
> an early exit.
>
> We also don't need to worry about fairness among the target memcg's
> descendants, because if the user cared about that then they would have
> specified one of the descendants as the target_memcg.
>
> But I would also like to go back to what Michal also pointed out in his
> reply -- why include target_memcg == memcg? Wouldn't we want this early
> bail out to happen down the memcg hierarchy as well?
>
> And most importantly, have you seen any issues in real life as a result of
> this? Is writing to memory.reclaim too slow?
>
> Please let me know what you think. I hope you have a great day!
> Joshua
Please correct me if I am wrong:

        root
        /  \
       A    B
      / \
    AA   AB
   /  \
 AA1   AA2

Let's take the above hierarchy as an example. Assuming proactive
reclaim is launched on AA, there are two scenarios:
1. AA has folios charged ('echo +memory > AA.subtree_control' was done
   after the workload had been running for a while). Without this
   patch, the reclaim can overshoot the requested amount by a large
   margin, since it reclaims from AA and all of its descendants
   without checking nr_reclaimed until the traversal finishes. This is
   what this commit addresses: bail out once the goal is reached.
2. AA has NO folios charged. Keep the fair scanning among the
   descendants.
PS: reclaim launched on a bottom-level cgroup wouldn't be affected, as
there is no traversal at all.
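The two scenarios can be sketched with a toy model of the iteration
(Python, purely illustrative; the memcg names and per-group
reclaimable page counts are made up, and real reclaim is far more
involved than this):

```python
# Toy model of shrink_node_memcgs()'s loop: walk the target memcg and
# its descendants in pre-order (like mem_cgroup_iter), accumulating
# nr_reclaimed, and bail out early once the goal is met -- the
# behavior the patch adds for proactive reclaim.

def proactive_reclaim(tree, nr_to_reclaim):
    """tree: ordered dict mapping memcg name -> reclaimable pages,
    listed in iteration order (target first, then descendants)."""
    nr_reclaimed = 0
    visited = []
    for memcg, pages in tree.items():  # dicts preserve insertion order
        visited.append(memcg)
        nr_reclaimed += min(pages, nr_to_reclaim - nr_reclaimed)
        if nr_reclaimed >= nr_to_reclaim:  # bail out: goal reached
            break
    return nr_reclaimed, visited

# Scenario 1: AA itself has folios charged and alone meets the goal,
# so AA1/AA2 are never scanned.
print(proactive_reclaim({"AA": 128, "AA1": 64, "AA2": 64}, 100))
# -> (100, ['AA'])

# Scenario 2: AA has no folios charged; the walk continues through
# the descendants.
print(proactive_reclaim({"AA": 0, "AA1": 64, "AA2": 64}, 100))
# -> (100, ['AA', 'AA1', 'AA2'])
```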
>
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > ---
> > mm/vmscan.c | 11 +++++++++--
> > 1 file changed, 9 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 670fe9fae5ba..5dcca4559b18 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -6028,8 +6028,15 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> > sc->nr_scanned - scanned,
> > sc->nr_reclaimed - reclaimed);
> >
> > - /* If partial walks are allowed, bail once goal is reached */
> > - if (partial && sc->nr_reclaimed >= sc->nr_to_reclaim) {
> > + /* If partial walks are allowed, or for proactive reclaim where
> > + * the target memcg is clearly defined, letting us ignore
> > + * fairness, bail once the goal is reached.
> > + * Note: for proactive reclaim, the criterion makes sense only
> > + * when target_memcg has both descendant groups and folios
> > + * charged. Otherwise, walk the whole tree under target_memcg.
> > + */
> > + if ((partial || (sc->proactive && target_memcg == memcg)) &&
> > + sc->nr_reclaimed >= sc->nr_to_reclaim) {
> > mem_cgroup_iter_break(target_memcg, memcg);
> > break;
> > }
> > --
> > 2.25.1