Message-ID: <4a793133-6cb3-42d4-948f-84eae6fa7df3@suse.cz>
Date: Wed, 1 Oct 2025 16:59:02 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
Brendan Jackman <jackmanb@...gle.com>, Zi Yan <ziy@...dia.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Gregory Price <gourry@...rry.net>, Joshua Hahn <joshua.hahnjy@...il.com>
Subject: Re: [PATCH] mm: page_alloc: avoid kswapd thrashing due to NUMA
restrictions
On 9/19/25 6:21 PM, Johannes Weiner wrote:
> On NUMA systems without bindings, allocations check all nodes for free
> space, then wake up the kswapds on all nodes and retry. This ensures
> all available space is evenly used before reclaim begins. However,
> when one process or certain allocations have node restrictions, they
> can cause kswapds on only a subset of nodes to be woken up.
>
> Since kswapd hysteresis targets watermarks that are *higher* than
> needed for allocation, even *unrestricted* allocations can now get
> suckered onto such nodes that are already pressured. This ends up
> concentrating all allocations on them, even when there are idle nodes
> available for the unrestricted requests.
>
> This was observed with two numa nodes, where node0 is normal and node1
> is ZONE_MOVABLE to facilitate hotplugging: a kernel allocation wakes
> kswapd on node0 only (since node1 is not eligible); once kswapd0 is
> active, the watermarks hover between low and high, and then even the
> movable allocations end up on node0, only to be kicked out again;
> meanwhile node1 is empty and idle.
Is this because node1 is a slow tier, as Zi suggested, or are we talking
about allocations coming from node0's CPUs, while allocations from
node1's CPUs would be fine?
Also this sounds like something that ZONELIST_ORDER_ZONE handled until
it was removed. But it wouldn't help with the NUMA binding case.
> Similar behavior is possible when a process with NUMA bindings is
> causing selective kswapd wakeups.
>
> To fix this, on NUMA systems augment the (misleading) watermark test
> with a check for whether kswapd is already active during the first
> iteration through the zonelist. If this fails to place the request,
> kswapd must be running everywhere already, and the watermark test is
> good enough to decide placement.
Suppose kswapd has already finished reclaim, so this check wouldn't kick in.
Wouldn't we still be over-pressuring node0, just somewhat less?
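To illustrate the window I mean, a rough sketch (the helper name is made
up; the zone/watermark accessors are the existing ones):

static bool node0_still_attracts(struct zone *zone, unsigned int order,
				 int highest_zoneidx)
{
	long free = zone_page_state(zone, NR_FREE_PAGES);

	/*
	 * kswapd0 is back asleep on its waitqueue, so the new
	 * skip_kswapd_nodes check no longer excludes this zone...
	 */
	if (!waitqueue_active(&zone->zone_pgdat->kswapd_wait))
		return false;

	/*
	 * ...yet free pages can still sit only just above the low
	 * watermark, so the plain watermark test keeps admitting
	 * allocations here while node1 stays idle.
	 */
	return __zone_watermark_ok(zone, order, low_wmark_pages(zone),
				   highest_zoneidx, 0, free);
}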
> With this patch, unrestricted requests successfully make use of node1,
> even while kswapd is reclaiming node0 for restricted allocations.
>
> [gourry@...rry.net: don't retry if no kswapds were active]
> Signed-off-by: Gregory Price <gourry@...rry.net>
> Tested-by: Joshua Hahn <joshua.hahnjy@...il.com>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> ---
> mm/page_alloc.c | 24 ++++++++++++++++++++++++
> 1 file changed, 24 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cf38d499e045..ffdaf5e30b58 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3735,6 +3735,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> struct pglist_data *last_pgdat = NULL;
> bool last_pgdat_dirty_ok = false;
> bool no_fallback;
> + bool skip_kswapd_nodes = nr_online_nodes > 1;
> + bool skipped_kswapd_nodes = false;
>
> retry:
> /*
> @@ -3797,6 +3799,19 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> }
> }
>
> + /*
> + * If kswapd is already active on a node, keep looking
> + * for other nodes that might be idle. This can happen
> + * if another process has NUMA bindings and is causing
> + * kswapd wakeups on only some nodes. Avoid accidental
> + * "node_reclaim_mode"-like behavior in this case.
> + */
> + if (skip_kswapd_nodes &&
> + !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
> + skipped_kswapd_nodes = true;
> + continue;
> + }
> +
> cond_accept_memory(zone, order, alloc_flags);
>
> /*
> @@ -3888,6 +3903,15 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> }
> }
>
> + /*
> + * If we skipped over nodes with active kswapds and found no
> + * idle nodes, retry and place anywhere the watermarks permit.
> + */
> + if (skip_kswapd_nodes && skipped_kswapd_nodes) {
> + skip_kswapd_nodes = false;
> + goto retry;
> + }
> +
> /*
> * It's possible on a UMA machine to get through all zones that are