Message-ID: <ZS+2qqjEO5/867br@gmail.com>
Date: Wed, 18 Oct 2023 12:42:50 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: x86@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Michal Hocko <mhocko@...e.com>,
Peter Zijlstra <peterz@...radead.org>,
Qi Zheng <zhengqi.arch@...edance.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v2] x86/mm: Drop 4MB restriction on minimal NUMA node memory size
* Mike Rapoport <rppt@...nel.org> wrote:
> From: "Mike Rapoport (IBM)" <rppt@...nel.org>
>
> Qi Zheng reports crashes in a production environment and provides a
> simplified example as a reproducer:
>
> For example, if we use qemu to start a two NUMA node kernel,
> one of the nodes has 2M memory (less than NODE_MIN_SIZE),
> and the other node has 2G, then we will encounter the
> following panic:
>
> [ 0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
> [ 0.150783] #PF: supervisor write access in kernel mode
> [ 0.151488] #PF: error_code(0x0002) - not-present page
> <...>
> [ 0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
> <...>
> [ 0.169781] Call Trace:
> [ 0.170159] <TASK>
> [ 0.170448] deactivate_slab+0x187/0x3c0
> [ 0.171031] ? bootstrap+0x1b/0x10e
> [ 0.171559] ? preempt_count_sub+0x9/0xa0
> [ 0.172145] ? kmem_cache_alloc+0x12c/0x440
> [ 0.172735] ? bootstrap+0x1b/0x10e
> [ 0.173236] bootstrap+0x6b/0x10e
> [ 0.173720] kmem_cache_init+0x10a/0x188
> [ 0.174240] start_kernel+0x415/0x6ac
> [ 0.174738] secondary_startup_64_no_verify+0xe0/0xeb
> [ 0.175417] </TASK>
> [ 0.175713] Modules linked in:
> [ 0.176117] CR2: 0000000000000000
>
> The crashes happen because of inconsistency between nodemask that has
> nodes with less than 4MB as memoryless and the actual memory fed into
> core mm.
Presumably the core MM got fixed as well, to not just crash but provide some
sort of warning?
> The commit 9391a3f9c7f1 ("[PATCH] x86_64: Clear more state when ignoring
> empty node in SRAT parsing") that introduced minimal size of a NUMA node
> does not explain why a node size cannot be less than 4MB and what boot
> failures this restriction might fix.
>
> Since then a lot has changed and core mm won't confuse badly about small
> node sizes.
Core MM won't get confused ... other than by the above weird Qemu topology,
to which it responds with a ... NULL pointer dereference?
Seems quite close to the literal definition of 'get confused badly' to me,
and doesn't give me the warm fuzzy feeling that giving the core MM even
*more* weird topologies is super safe ... :-/
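( FWIW, the inconsistency is easy to model outside the kernel. The snippet
  below is only an illustrative userspace sketch of the pre-patch logic -
  not kernel code, and the names are made up stand-ins - but it fails in
  the same shape: the sub-4MB node never gets its pg_data_t equivalent
  allocated, yet its memory is still attributed to it, so the first
  per-node access chases a NULL pointer, just like the panic above: )

/*
 * Toy userspace model of the pre-patch behaviour (illustrative only):
 * nodes below NODE_MIN_SIZE are skipped at registration time, but their
 * memory is still accounted to them afterwards.
 */
#include <stdio.h>
#include <stdlib.h>

#define NODE_MIN_SIZE	(4UL * 1024 * 1024)	/* the limit being dropped */

struct pg_data { unsigned long present_bytes; };	/* stand-in for pg_data_t */

static struct pg_data *node_data[2];			/* stand-in for NODE_DATA(nid) */
static unsigned long node_size[2] = {
	2UL * 1024 * 1024,		/* node 0: 2MB, below NODE_MIN_SIZE */
	2UL * 1024 * 1024 * 1024,	/* node 1: 2GB */
};

int main(void)
{
	/* "registration": too-small nodes never get their node data */
	for (int nid = 0; nid < 2; nid++) {
		if (node_size[nid] < NODE_MIN_SIZE)
			continue;			/* node 0 is skipped */
		node_data[nid] = calloc(1, sizeof(*node_data[nid]));
	}

	/*
	 * "core mm": the memory is still attributed to node 0, so the
	 * first per-node access dereferences NULL and we crash here.
	 */
	for (int nid = 0; nid < 2; nid++)
		node_data[nid]->present_bytes = node_size[nid];

	puts("not reached");
	return 0;
}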
> Drop the limitation for the minimal node size.
While I agree with dropping the limitation, and I agree that 9391a3f9c7f1
should have provided more of a justification, I believe a core MM fix is in
order as well, for it to not crash. [ If it's fixed upstream already,
please reference the relevant commit ID. ]
Also, the changelog spelling & general presentation were quite low quality
- I've fixed it up a bit below, please carry this version going forward.
Please spell-check your patches before sending out the Nth version of them -
maybe maintainers are skipping them for a reason!
Thanks,
Ingo
=================>
From: "Mike Rapoport (IBM)" <rppt@...nel.org>
Date: Tue, 17 Oct 2023 09:22:15 +0300
Subject: [PATCH] x86/mm: Drop 4MB restriction on minimal NUMA node memory size
Qi Zheng reported crashes in a production environment and provided a
simplified example as a reproducer:
| For example, if we use qemu to start a two NUMA node kernel,
| one of the nodes has 2M memory (less than NODE_MIN_SIZE),
| and the other node has 2G, then we will encounter the
| following panic:
|
| BUG: kernel NULL pointer dereference, address: 0000000000000000
| <...>
| RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
| <...>
| Call Trace:
| <TASK>
| deactivate_slab()
| bootstrap()
| kmem_cache_init()
| start_kernel()
| secondary_startup_64_no_verify()
The crashes happen because of inconsistency between the nodemask that
has nodes with less than 4MB as memoryless, and the actual memory fed
into the core mm.
The commit:
9391a3f9c7f1 ("[PATCH] x86_64: Clear more state when ignoring empty node in SRAT parsing")
... that introduced the minimal size of a NUMA node does not explain why
a node size cannot be less than 4MB, nor what boot failures this
restriction might fix.
In the 17 years since then a lot has changed and core mm won't get
confused about small node sizes.
Drop the limitation for the minimal node size.
[ mingo: Improved changelog clarity. ]
Reported-by: Qi Zheng <zhengqi.arch@...edance.com>
Signed-off-by: Mike Rapoport (IBM) <rppt@...nel.org>
Not-Yet-Signed-off-by: Ingo Molnar <mingo@...nel.org>
Acked-by: David Hildenbrand <david@...hat.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Link: https://lore.kernel.org/all/20230212110305.93670-1-zhengqi.arch@bytedance.com/
---
arch/x86/include/asm/numa.h | 7 -------
arch/x86/mm/numa.c | 7 -------
2 files changed, 14 deletions(-)
diff --git a/arch/x86/include/asm/numa.h b/arch/x86/include/asm/numa.h
index e3bae2b60a0d..ef2844d69173 100644
--- a/arch/x86/include/asm/numa.h
+++ b/arch/x86/include/asm/numa.h
@@ -12,13 +12,6 @@
#define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
-/*
- * Too small node sizes may confuse the VM badly. Usually they
- * result from BIOS bugs. So dont recognize nodes as standalone
- * NUMA entities that have less than this amount of RAM listed:
- */
-#define NODE_MIN_SIZE (4*1024*1024)
-
extern int numa_off;
/*
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index c01c5506fd4a..aa39d678fe81 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -602,13 +602,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
if (start >= end)
continue;
- /*
- * Don't confuse VM with a node that doesn't have the
- * minimum amount of memory:
- */
- if (end && (end - start) < NODE_MIN_SIZE)
- continue;
-
alloc_node_data(nid);
}