Message-ID: <177072070352.542.12916536901134546139.tip-bot2@tip-bot2>
Date: Tue, 10 Feb 2026 10:51:43 -0000
From: "tip-bot2 for Thomas Gleixner" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>, Thomas Gleixner <tglx@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Alexei Starovoitov <ast@...nel.org>, Vlastimil Babka <vbabka@...e.cz>,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: core/debugobjects] debugobject: Make it work with deferred page
initialization - again
The following commit has been merged into the core/debugobjects branch of tip:
Commit-ID: 998d5a6164d9fb33224f24b3e36dac0a8b0496e5
Gitweb: https://git.kernel.org/tip/998d5a6164d9fb33224f24b3e36dac0a8b0496e5
Author: Thomas Gleixner <tglx@...nel.org>
AuthorDate: Sat, 07 Feb 2026 14:27:05 +01:00
Committer: Thomas Gleixner <tglx@...nel.org>
CommitterDate: Tue, 10 Feb 2026 11:47:35 +01:00
debugobject: Make it work with deferred page initialization - again
debugobjects uses __GFP_HIGH for allocations as it might be invoked
within locked regions. That worked perfectly fine until v6.18. It still
works correctly when deferred page initialization is disabled and works
by chance when no page allocation is required before deferred page
initialization has completed.
Since v6.18, allocations without a reclaim flag cause new_slab() to end up in
alloc_frozen_pages_nolock_noprof(), which returns early when deferred
page initialization has not yet completed. As the deferred page
initialization takes quite a while the debugobject pool is depleted and
debugobjects are disabled.
This can be worked around when PREEMPT_COUNT is enabled, as that allows
debugobjects to add __GFP_KSWAPD_RECLAIM to the GFP flags when the context
is preemptible. When PREEMPT_COUNT is disabled the context is unknown and
the reclaim bit can't be set, because the caller might hold locks which
could deadlock in the allocator.
In preemptible context the reclaim bit is harmless and not a performance
issue, as the pool refill is usually invoked from slow path initialization
context.
That makes debugobjects depend on PREEMPT_COUNT || !DEFERRED_STRUCT_PAGE_INIT.
Fixes: af92793e52c3 ("slab: Introduce kmalloc_nolock() and kfree_nolock().")
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Thomas Gleixner <tglx@...nel.org>
Tested-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Acked-by: Alexei Starovoitov <ast@...nel.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Link: https://patch.msgid.link/87pl6gznti.ffs@tglx
---
lib/Kconfig.debug | 1 +
lib/debugobjects.c | 19 ++++++++++++++++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ba36939..a874146 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -723,6 +723,7 @@ source "mm/Kconfig.debug"
config DEBUG_OBJECTS
bool "Debug object operations"
+ depends on PREEMPT_COUNT || !DEFERRED_STRUCT_PAGE_INIT
depends on DEBUG_KERNEL
help
If you say Y here, additional code will be inserted into the
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 89a1d67..12f50de 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -398,9 +398,26 @@ static void fill_pool(void)
atomic_inc(&cpus_allocating);
while (pool_should_refill(&pool_global)) {
+ gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
HLIST_HEAD(head);
- if (!kmem_alloc_batch(&head, obj_cache, __GFP_HIGH | __GFP_NOWARN))
+ /*
+ * Allow reclaim only in preemptible context and during
+ * early boot. If not preemptible, the caller might hold
+ * locks causing a deadlock in the allocator.
+ *
+ * If the reclaim flag is not set during early boot then
+ * allocations, which happen before deferred page
+ * initialization has completed, will fail.
+ *
+ * In preemptible context the flag is harmless and not a
+ * performance issue as that's usually invoked from slow
+ * path initialization context.
+ */
+ if (preemptible() || system_state < SYSTEM_SCHEDULING)
+ gfp |= __GFP_KSWAPD_RECLAIM;
+
+ if (!kmem_alloc_batch(&head, obj_cache, gfp))
break;
guard(raw_spinlock_irqsave)(&pool_lock);