Message-ID: <92894a9b-3088-4cf7-83bb-ea7382a35d82@neon.tech>
Date: Tue, 10 Jun 2025 11:17:36 +0100
From: Em Sharnoff <sharnoff@...n.tech>
To: linux-kernel@...r.kernel.org, x86@...nel.org, linux-mm@...ck.org
Cc: Ingo Molnar <mingo@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, Andy Lutomirski
<luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Borislav Petkov <bp@...en8.de>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
Oleg Vasilev <oleg@...n.tech>, Arthur Petukhovsky <arthur@...n.tech>,
Stefan Radig <stefan@...n.tech>, Misha Sakhnov <misha@...n.tech>
Subject: [PATCH v3 2/2] x86/mm: Use GFP_KERNEL for alloc_low_pages() after
boot
After boot, alloc_low_pages() allocates with GFP_ATOMIC. There is no
need for that: these allocations only happen in process context, so
GFP_KERNEL is more correct.
From Ingo M. [1]
> There's no real reason why it should be GFP_ATOMIC AFAICS, other than
> some historic inertia that nobody bothered to fix.
and previously Mike R. [2]
> The few callers that effectively use page allocator for the direct map
> updates are gart_iommu_init() and memory hotplug. Neither of them
> happen in an atomic context so there is no reason to use GFP_ATOMIC
> for these allocations.
>
> Replace GFP_ATOMIC with GFP_KERNEL to avoid using atomic reserves for
> allocations that do not require that.
[1]: https://lore.kernel.org/all/aEE6_S2a-1tk1dtI@gmail.com/
[2]: https://lore.kernel.org/all/20211111110241.25968-5-rppt@kernel.org/
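For illustration, a minimal sketch of what the two flags mean for a
call site like this one (the helper name and its atomic_ctx parameter
are hypothetical, not part of this patch or the kernel tree):

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustrative only: GFP_KERNEL may sleep and enter direct reclaim,
 * so it is valid only in process context. GFP_ATOMIC never sleeps
 * and may dip into emergency reserves, so it should be limited to
 * callers that genuinely cannot sleep.
 */
static void *alloc_pgt_pages_sketch(unsigned int num, bool atomic_ctx)
{
	unsigned int order = get_order((unsigned long)num << PAGE_SHIFT);
	gfp_t gfp = atomic_ctx ? GFP_ATOMIC : GFP_KERNEL;

	return (void *)__get_free_pages(gfp | __GFP_ZERO, order);
}

Since alloc_low_pages() is only reached from process context after
boot (see [2]), the GFP_KERNEL path is the right one here.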
Signed-off-by: Em Sharnoff <sharnoff@...n.tech>
---
Changelog:
- v2: Add this patch
- v3: No changes
---
arch/x86/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a2665b6fe376..3a25cd9e9076 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -131,7 +131,7 @@ __ref void *alloc_low_pages(unsigned int num)
 		unsigned int order;
 
 		order = get_order((unsigned long)num << PAGE_SHIFT);
-		return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
+		return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
 	}
 
 	if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) {
--
2.39.5