Message-ID: <062402c3-e640-4578-b1b1-318a0a728a93@neon.tech>
Date: Fri, 13 Jun 2025 21:11:48 +0100
From: Em Sharnoff <sharnoff@...n.tech>
To: linux-kernel@...r.kernel.org, x86@...nel.org, linux-mm@...ck.org
Cc: Ingo Molnar <mingo@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, Andy Lutomirski
<luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Borislav Petkov <bp@...en8.de>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
Oleg Vasilev <oleg@...n.tech>, Arthur Petukhovsky <arthur@...n.tech>,
Stefan Radig <stefan@...n.tech>, Misha Sakhnov <misha@...n.tech>
Subject: [PATCH v4 3/4] x86/mm: Handle alloc failure in phys_*_init()

During memory hotplug, allocation failures in phys_*_init() aren't
handled, which results in a NULL pointer dereference if they occur.
Fix this by returning -ENOMEM when alloc_low_page() fails.

This patch depends on the previous patch ("x86/mm: Allow error returns
from phys_*_init()").

Signed-off-by: Em Sharnoff <sharnoff@...n.tech>
---
Changelog:
- v4: Split this patch out from the error handling changes
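
Note for reviewers: each hunk below is the usual "allocate, check,
propagate" pattern around alloc_low_page(). As a rough illustration of
the same error-propagation style, here is a minimal userspace C sketch
(hypothetical names, not kernel code, and not part of this patch):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for alloc_low_page(): returns NULL on allocation failure. */
static void *alloc_table(void)
{
        return calloc(1, 4096);
}

/*
 * Mirrors what phys_*_init() does after this patch: return -ENOMEM
 * instead of dereferencing a NULL table pointer.
 */
static int init_level(void)
{
        void *table = alloc_table();

        if (!table)
                return -ENOMEM;

        /* ... populate the table ... */
        free(table);
        return 0;
}

int main(void)
{
        int ret = init_level();

        if (ret)
                fprintf(stderr, "init_level: %d\n", ret);
        return ret ? 1 : 0;
}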
---
 arch/x86/mm/init_64.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b18ab2dcc799..a585daa76d69 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -572,6 +572,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pte = alloc_low_page();
+		if (!pte)
+			return -ENOMEM;
 		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
@@ -664,6 +666,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pmd = alloc_low_page();
+		if (!pmd)
+			return -ENOMEM;
 		ret = phys_pmd_init(pmd, paddr, paddr_end,
 				    page_size_mask, prot, init);
 
@@ -726,6 +730,8 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pud = alloc_low_page();
+		if (!pud)
+			return -ENOMEM;
 		ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
 				    page_size_mask, prot, init);
 
@@ -774,6 +780,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 		}
 
 		p4d = alloc_low_page();
+		if (!p4d)
+			return -ENOMEM;
 		ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
 				    page_size_mask, prot, init);
 
--
2.39.5