Message-ID: <136b4999-1c05-4d30-9521-d621196e6ba7@neon.tech>
Date: Fri, 11 Jul 2025 17:25:47 +0100
From: Em Sharnoff <sharnoff@...n.tech>
To: linux-kernel@...r.kernel.org, x86@...nel.org, linux-mm@...ck.org
Cc: Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, "H. Peter Anvin" <hpa@...or.com>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
Oleg Vasilev <oleg@...n.tech>, Arthur Petukhovsky <arthur@...n.tech>,
Stefan Radig <stefan@...n.tech>, Misha Sakhnov <misha@...n.tech>
Subject: [PATCH v5 3/4] x86/mm: Handle alloc failure in phys_*_init()
During memory hotplug, allocation failures in phys_*_init() aren't
handled, which results in a NULL pointer dereference if they occur.
Handle them by bailing out with -ENOMEM at each level instead of
passing a NULL table pointer down to the next level.

This patch depends on the previous patch ("x86/mm: Allow error returns
from phys_*_init()").

Signed-off-by: Em Sharnoff <sharnoff@...n.tech>
---
Changelog:
- v4: Split this patch out from the error handling changes
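
For reviewers, a minimal standalone sketch of the check-and-propagate
pattern this patch applies at each page-table level (illustration only,
not kernel code; alloc_page_stub()/pmd_init_stub()/pte_init_stub() are
hypothetical stand-ins for alloc_low_page()/phys_pmd_init()/
phys_pte_init()). Without the NULL check, a failed allocation would be
handed to the next-level init function and dereferenced there:

/*
 * Standalone userspace sketch of the error-propagation pattern.
 * Each level checks its table allocation and returns -ENOMEM
 * instead of passing NULL to the level below.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* stand-in for alloc_low_page(); may return NULL under memory pressure */
static void *alloc_page_stub(void)
{
	return calloc(1, PAGE_SIZE);
}

/* stand-in for phys_pte_init(): writes through the table it is given */
static int pte_init_stub(unsigned long *pte)
{
	memset(pte, 0, PAGE_SIZE);	/* would oops if pte were NULL */
	return 0;
}

/* stand-in for phys_pmd_init(): allocates the next level, then fills it */
static int pmd_init_stub(void)
{
	unsigned long *pte = alloc_page_stub();
	int ret;

	if (!pte)
		return -ENOMEM;		/* propagate instead of oopsing */

	ret = pte_init_stub(pte);
	free(pte);
	return ret;
}

int main(void)
{
	int ret = pmd_init_stub();

	if (ret)
		fprintf(stderr, "mapping init failed: %d\n", ret);
	return ret ? 1 : 0;
}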
---
arch/x86/mm/init_64.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ca71eaec1db5..eced309a4015 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -573,6 +573,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pte = alloc_low_page();
+		if (!pte)
+			return -ENOMEM;
 		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
@@ -665,6 +667,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pmd = alloc_low_page();
+		if (!pmd)
+			return -ENOMEM;
 		ret = phys_pmd_init(pmd, paddr, paddr_end,
				    page_size_mask, prot, init);
 
@@ -727,6 +731,8 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 		}
 
 		pud = alloc_low_page();
+		if (!pud)
+			return -ENOMEM;
 		ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
				    page_size_mask, prot, init);
 
@@ -775,6 +781,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 		}
 
 		p4d = alloc_low_page();
+		if (!p4d)
+			return -ENOMEM;
 		ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
				    page_size_mask, prot, init);
 
--
2.39.5