Message-ID: <20260114074049.229935-1-jiayuan.chen@linux.dev>
Date: Wed, 14 Jan 2026 15:40:34 +0800
From: Jiayuan Chen <jiayuan.chen@...ux.dev>
To: linux-mm@...ck.org,
shakeel.butt@...ux.dev
Cc: Jiayuan Chen <jiayuan.chen@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...nel.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Yuanchu Xie <yuanchu@...gle.com>,
Wei Xu <weixugc@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Brendan Jackman <jackmanb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Zi Yan <ziy@...dia.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Jiayuan Chen <jiayuan.chen@...pee.com>,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Subject: [PATCH v3 0/2] mm/vmscan: mitigate spurious kswapd_failures reset and add tracepoints
== Problem ==
We observed an issue in production on a multi-NUMA system where kswapd
runs endlessly, causing sustained heavy read I/O pressure across the
entire system.
The root cause is that direct reclaim triggered by cgroup memory.high
keeps resetting kswapd_failures to 0, even when the node cannot be
balanced. This prevents kswapd from ever stopping after reaching
MAX_RECLAIM_RETRIES.
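For context, the reset in question is the one performed at the end of
shrink_node() in mm/vmscan.c. The snippet below is a paraphrased sketch of
the relevant upstream logic (exact code varies by kernel version), not part
of this series:

```c
/* Paraphrased sketch of current upstream behavior; not verbatim. */

/* shrink_node(): any reclaim progress, including direct reclaim
 * triggered by cgroup memory.high, clears the per-node counter.
 */
if (reclaimable)
	pgdat->kswapd_failures = 0;

/* prepare_kswapd_sleep(): kswapd only gives up on a node once the
 * counter reaches MAX_RECLAIM_RETRIES, so continuous direct reclaim
 * keeps reviving it indefinitely.
 */
if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
	return true;
```

The bpftrace script below was used to observe this reset pattern on the
affected nodes: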
```bash
bpftrace -e '
#include <linux/mmzone.h>
#include <linux/shrinker.h>

/* Fires when kswapd finishes one balancing attempt on a node. */
kprobe:balance_pgdat {
    $pgdat = (struct pglist_data *)arg0;
    if ($pgdat->kswapd_failures > 0) {
        printf("[node %d] [%lu] kswapd end, kswapd_failures %d\n",
            $pgdat->node_id, jiffies, $pgdat->kswapd_failures);
    }
}

/* Fires when direct reclaim ends, i.e. when kswapd_failures gets reset. */
tracepoint:vmscan:mm_vmscan_direct_reclaim_end {
    printf("[cpu %d] [%lu] reset kswapd_failures, nr_reclaimed %lu\n",
        cpu, jiffies, args.nr_reclaimed);
}
'
```
The trace results showed that even after kswapd_failures had reached 15,
continuous direct reclaim kept resetting it to 0. This was accompanied by
a flood of these trace entries, and shortly afterwards we observed massive
refaults.
== Solution ==
Patch 1 fixes the issue by resetting kswapd_failures only when the node
is actually balanced. It introduces pgdat_try_reset_kswapd_failures()
as a wrapper that checks pgdat_balanced() before resetting.
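For illustration, the check-before-reset idea could look roughly like the
sketch below; the helper's exact signature and call sites are defined by
patch 1 itself, and the parameters here simply mirror vmscan's existing
pgdat_balanced() helper:

```c
/*
 * Illustrative sketch only; see patch 1 for the real implementation.
 * The point is that callers stop clearing the counter unconditionally.
 */
static void pgdat_try_reset_kswapd_failures(pg_data_t *pgdat, int order,
					    int highest_zoneidx)
{
	/* Forget earlier kswapd failures only if the node is now
	 * actually balanced; otherwise keep the count so kswapd can
	 * still back off after MAX_RECLAIM_RETRIES.
	 */
	if (pgdat_balanced(pgdat, order, highest_zoneidx))
		pgdat->kswapd_failures = 0;
}
```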
Patch 2 extends the wrapper to track why kswapd_failures was reset,
adding tracepoints for better observability (see the sketch after this
list):
- mm_vmscan_reset_kswapd_failures: traces each reset with reason
- mm_vmscan_kswapd_reclaim_fail: traces each kswapd reclaim failure
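As a purely illustrative sketch (the real event definitions, fields, and
reason encoding are those in patch 2), a tracepoint along these lines
would live in include/trace/events/vmscan.h:

```c
/* Hypothetical sketch; field layout in the actual patch may differ. */
TRACE_EVENT(mm_vmscan_reset_kswapd_failures,

	TP_PROTO(int nid, int reason),

	TP_ARGS(nid, reason),

	TP_STRUCT__entry(
		__field(int, nid)
		__field(int, reason)
	),

	TP_fast_assign(
		__entry->nid = nid;
		__entry->reason = reason;
	),

	TP_printk("nid=%d reason=%d", __entry->nid, __entry->reason)
);
```

With a stable event like this, the ad-hoc kprobe in the bpftrace script
above can be replaced by a plain tracepoint attach.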
---
v2 -> v3: https://lore.kernel.org/all/20251226080042.291657-1-jiayuan.chen@linux.dev/
- Add tracepoints for kswapd_failures reset and reclaim failure
- Expand commit message with test results
v1 -> v2: https://lore.kernel.org/all/20251222122022.254268-1-jiayuan.chen@linux.dev/
Jiayuan Chen (2):
mm/vmscan: mitigate spurious kswapd_failures reset from direct reclaim
mm/vmscan: add tracepoint and reason for kswapd_failures reset
include/linux/mmzone.h | 9 +++++++
include/trace/events/vmscan.h | 51 +++++++++++++++++++++++++++++++++++
mm/memory-tiers.c | 2 +-
mm/page_alloc.c | 2 +-
mm/vmscan.c | 33 ++++++++++++++++++++---
5 files changed, 91 insertions(+), 6 deletions(-)
--
2.43.0