Message-ID: <42e6103fb07fca398f0942c7c41129ffcce90dc6@linux.dev>
Date: Tue, 23 Dec 2025 01:51:32 +0000
From: "Jiayuan Chen" <jiayuan.chen@...ux.dev>
To: "Andrew Morton" <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, "Jiayuan Chen" <jiayuan.chen@...pee.com>, "Johannes
Weiner" <hannes@...xchg.org>, "David Hildenbrand" <david@...nel.org>,
"Michal Hocko" <mhocko@...nel.org>, "Qi Zheng"
<zhengqi.arch@...edance.com>, "Shakeel Butt" <shakeel.butt@...ux.dev>,
"Lorenzo Stoakes" <lorenzo.stoakes@...cle.com>, "Axel Rasmussen"
<axelrasmussen@...gle.com>, "Yuanchu Xie" <yuanchu@...gle.com>, "Wei Xu"
<weixugc@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] mm/vmscan: mitigate spurious kswapd_failures reset
from direct reclaim
December 23, 2025 at 02:29, "Andrew Morton" <akpm@...ux-foundation.org> wrote:
Hi Andrew,
Thanks for the review.
>
> On Mon, 22 Dec 2025 20:20:21 +0800 Jiayuan Chen <jiayuan.chen@...ux.dev> wrote:
>
> >
> > From: Jiayuan Chen <jiayuan.chen@...pee.com>
> >
> > When kswapd fails to reclaim memory, kswapd_failures is incremented.
> > Once it reaches MAX_RECLAIM_RETRIES, kswapd stops running to avoid
> > futile reclaim attempts. However, any successful direct reclaim
> > unconditionally resets kswapd_failures to 0, which can cause problems.
> >
> > We observed an issue in production on a multi-NUMA system where a
> > process allocated large amounts of anonymous pages on a single NUMA
> > node, causing its watermark to drop below high and evicting most file
> > pages:
> >
> > $ numastat -m
> > Per-node system memory usage (in MBs):
> > Node 0 Node 1 Total
> > --------------- --------------- ---------------
> > MemTotal 128222.19 127983.91 256206.11
> > MemFree 1414.48 1432.80 2847.29
> > MemUsed 126807.71 126551.11 252358.82
> > SwapCached 0.00 0.00 0.00
> > Active 29017.91 25554.57 54572.48
> > Inactive 92749.06 95377.00 188126.06
> > Active(anon) 28998.96 23356.47 52355.43
> > Inactive(anon) 92685.27 87466.11 180151.39
> > Active(file) 18.95 2198.10 2217.05
> > Inactive(file) 63.79 7910.89 7974.68
> >
> > With swap disabled, only file pages can be reclaimed. When kswapd is
> > woken (e.g., via wake_all_kswapds()), it runs continuously but cannot
> > raise free memory above the high watermark since reclaimable file pages
> > are insufficient. Normally, kswapd would eventually stop after
> > kswapd_failures reaches MAX_RECLAIM_RETRIES.
> >
> > However, pods on this machine have memory.high set in their cgroup.
> >
> What's a "pod"?
A pod is the basic Kubernetes workload unit, i.e. a group of one or more containers sharing a cgroup. Sorry for the unclear terminology.
> >
> > Business processes continuously trigger the high limit, causing frequent
> > direct reclaim that keeps resetting kswapd_failures to 0. This prevents
> > kswapd from ever stopping.
> >
> > The result is that kswapd runs endlessly, repeatedly evicting the few
> > remaining file pages which are actually hot. These pages constantly
> > refault, generating sustained heavy IO READ pressure.
> >
> Yes, not good.
>
> >
> > Fix this by only resetting kswapd_failures from direct reclaim when the
> > node is actually balanced. This prevents direct reclaim from keeping
> > kswapd alive when the node cannot be balanced through reclaim alone.
> >
> > ...
> >
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2648,6 +2648,15 @@ static bool can_age_anon_pages(struct lruvec *lruvec,
> > lruvec_memcg(lruvec));
> > }
> >
> > +static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx);
> >
> Forward declaration could be avoided by relocating pgdat_balanced(),
> although the patch will get a lot larger.
Thanks for pointing this out.
> >
> > +static inline void reset_kswapd_failures(struct pglist_data *pgdat,
> > + struct scan_control *sc)
> >
> It would be nice to have a nice comment explaining why this is here.
> Why are we checking for balanced?
You're right, a comment explaining the rationale would be helpful.
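For v2 I'm thinking of something along these lines above the helper (just a rough draft, wording up for discussion):

/*
 * Direct reclaim used to reset kswapd_failures on any successful
 * reclaim.  On a node that reclaim alone cannot bring back above the
 * high watermark (e.g. mostly anonymous memory with swap disabled),
 * that keeps a hopeless kswapd running forever, churning the last few
 * hot file pages and causing sustained refault IO.  Only reset the
 * counter when the node is actually balanced.
 */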
> >
> > +{
> > + if (!current_is_kswapd() &&
> >
> kswapd can no longer clear ->kswapd_failures. What's the thinking here?
Good catch. My original thinking was that kswapd already checks pgdat_balanced()
in its own path after a successful reclaim, so I wanted to avoid a redundant check.
But looking at the code again, this is indeed a bug: kswapd's reclaim path does
need to clear kswapd_failures on successful reclaim.
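For v2 I'd restructure the helper roughly like this (untested sketch; it assumes
sc->reclaim_idx is the right highest_zoneidx to pass and keeps the pgdat_balanced()
forward declaration from this patch):

static inline void reset_kswapd_failures(struct pglist_data *pgdat,
					 struct scan_control *sc)
{
	/* kswapd made progress, keep clearing its failure counter as today. */
	if (current_is_kswapd()) {
		pgdat->kswapd_failures = 0;
		return;
	}

	/*
	 * For direct reclaim, only reset the counter once the node is
	 * actually balanced, so frequent memory.high reclaim cannot keep
	 * resurrecting a kswapd that is unable to balance the node.
	 */
	if (pgdat_balanced(pgdat, sc->order, sc->reclaim_idx))
		pgdat->kswapd_failures = 0;
}

That way kswapd's behaviour is unchanged and only the direct reclaim path gets the
stricter balanced check.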