Message-Id: <20230212110305.93670-1-zhengqi.arch@bytedance.com>
Date: Sun, 12 Feb 2023 19:03:05 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: akpm@...ux-foundation.org, vbabka@...e.cz
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Qi Zheng <zhengqi.arch@...edance.com>,
Teng Hu <huteng.ht@...edance.com>
Subject: [PATCH] mm: page_alloc: don't allocate page from memoryless nodes

On x86, numa_register_memblks() is only interested in nodes that have
enough memory, so it skips any node whose memory is below NODE_MIN_SIZE
(such a node is treated as memoryless). Later on, these memoryless nodes
are still initialized (a pgdat is allocated in free_area_init(), zonelists
are built, etc.) and are onlined in init_cpu_to_node() and init_gi_nodes().
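
For reference, the skip in question looks roughly like this (paraphrased
from arch/x86/mm/numa.c; the exact code varies by kernel version):

	/* numa_register_memblks(), simplified: */
	for_each_node_mask(nid, node_possible_map) {
		/* start/end span this node's memblks */
		if (start >= end)
			continue;

		/*
		 * Don't confuse VM with a node that doesn't have the
		 * minimum amount of memory:
		 */
		if (end && (end - start) < NODE_MIN_SIZE)
			continue;

		alloc_node_data(nid);	/* never reached for such nodes */
	}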

After boot, these memoryless nodes are in the N_ONLINE state but not in
the N_MEMORY state. Nevertheless, pages can still be allocated from them.
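
In other words, for such a node (illustrative):

	node_state(nid, N_ONLINE);	/* true: onlined in init_cpu_to_node()/init_gi_nodes() */
	node_state(nid, N_MEMORY);	/* false: no memory was registered for this node */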

SLUB, however, only processes nodes that are in the N_MEMORY state, e.g.
when allocating their struct kmem_cache_node. So if SLUB is given a page
from one of these memoryless nodes, the struct kmem_cache_node of the node
that page belongs to is NULL, which causes a panic.
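
The failing path is roughly the following (paraphrased from mm/slub.c;
exact function and field names vary by kernel version):

	/*
	 * SLUB only allocates s->node[nid] (struct kmem_cache_node) for
	 * nodes that have memory; for a memoryless node it stays NULL.
	 *
	 * Later, e.g. in deactivate_slab(), for a slab taken from such a
	 * node:
	 */
	struct kmem_cache_node *n = get_node(s, slab_nid(slab));	/* NULL */

	spin_lock_irqsave(&n->list_lock, flags);
	/*
	 * NULL pointer dereference: list_lock is the first field of
	 * struct kmem_cache_node, hence the fault at address 0 in the
	 * trace below.
	 */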

For example, when QEMU is used to boot a kernel with two NUMA nodes, one
with 2M of memory (less than NODE_MIN_SIZE) and the other with 2G, we hit
the following panic:

[ 0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 0.150783] #PF: supervisor write access in kernel mode
[ 0.151488] #PF: error_code(0x0002) - not-present page
<...>
[ 0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
<...>
[ 0.169781] Call Trace:
[ 0.170159] <TASK>
[ 0.170448] deactivate_slab+0x187/0x3c0
[ 0.171031] ? bootstrap+0x1b/0x10e
[ 0.171559] ? preempt_count_sub+0x9/0xa0
[ 0.172145] ? kmem_cache_alloc+0x12c/0x440
[ 0.172735] ? bootstrap+0x1b/0x10e
[ 0.173236] bootstrap+0x6b/0x10e
[ 0.173720] kmem_cache_init+0x10a/0x188
[ 0.174240] start_kernel+0x415/0x6ac
[ 0.174738] secondary_startup_64_no_verify+0xe0/0xeb
[ 0.175417] </TASK>
[ 0.175713] Modules linked in:
[ 0.176117] CR2: 0000000000000000

We have also hit this panic in a production environment: in a 2c2g
container with two NUMA nodes and 128M reserved for kdump, the kdump
kernel panics in the same way.

To fix this, skip memoryless nodes when allocating pages.

Reported-by: Teng Hu <huteng.ht@...edance.com>
Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
---
mm/page_alloc.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 588555754601..b9cce56f4e21 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4188,6 +4188,11 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			(alloc_flags & ALLOC_CPUSET) &&
 			!__cpuset_zone_allowed(zone, gfp_mask))
 				continue;
+
+		/* Don't allocate pages from memoryless nodes. */
+		if (!node_state(zone_to_nid(zone), N_MEMORY))
+			continue;
+
 		/*
 		 * When allocating a page cache page for writing, we
 		 * want to get it from a node that is within its dirty
--
2.20.1