Message-ID: <172900658689.1442.6290973730707769095.tip-bot2@tip-bot2>
Date: Tue, 15 Oct 2024 15:36:26 -0000
From: "tip-bot2 for Thomas Gleixner" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Zhen Lei <thunder.leizhen@...wei.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: core/debugobjects] debugobjects: Don't free objects directly on
CPU hotplug
The following commit has been merged into the core/debugobjects branch of tip:
Commit-ID: a2a702383e8baa22ee66ee60f1e036835a1ef42e
Gitweb: https://git.kernel.org/tip/a2a702383e8baa22ee66ee60f1e036835a1ef42e
Author: Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Mon, 07 Oct 2024 18:49:57 +02:00
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Tue, 15 Oct 2024 17:30:30 +02:00
debugobjects: Don't free objects directly on CPU hotplug
Freeing the per-CPU pool of the unplugged CPU directly is suboptimal, as the
objects could be reused in the real pool if there is room. Aside from that,
this gets the accounting wrong.

Use the regular free path instead, which allows reuse and keeps the accounting
correct.
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Reviewed-by: Zhen Lei <thunder.leizhen@...wei.com>
Link: https://lore.kernel.org/all/20241007164913.263960570@linutronix.de
---
lib/debugobjects.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 9867412..a3d4c54 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -430,27 +430,28 @@ static void free_object(struct debug_obj *obj)
}
#ifdef CONFIG_HOTPLUG_CPU
-static int object_cpu_offline(unsigned int cpu)
+static void put_objects(struct hlist_head *list)
{
- struct debug_percpu_free *percpu_pool;
struct hlist_node *tmp;
struct debug_obj *obj;
- unsigned long flags;
- /* Remote access is safe as the CPU is dead already */
- percpu_pool = per_cpu_ptr(&percpu_obj_pool, cpu);
- hlist_for_each_entry_safe(obj, tmp, &percpu_pool->free_objs, node) {
+ /*
+ * Using free_object() puts the objects into reuse or schedules
+ * them for freeing and it gets all the accounting correct.
+ */
+ hlist_for_each_entry_safe(obj, tmp, list, node) {
hlist_del(&obj->node);
- kmem_cache_free(obj_cache, obj);
+ free_object(obj);
}
+}
- raw_spin_lock_irqsave(&pool_lock, flags);
- obj_pool_used -= percpu_pool->obj_free;
- debug_objects_freed += percpu_pool->obj_free;
- raw_spin_unlock_irqrestore(&pool_lock, flags);
-
- percpu_pool->obj_free = 0;
+static int object_cpu_offline(unsigned int cpu)
+{
+ /* Remote access is safe as the CPU is dead already */
+ struct debug_percpu_free *pcp = per_cpu_ptr(&percpu_obj_pool, cpu);
+ put_objects(&pcp->free_objs);
+ pcp->obj_free = 0;
return 0;
}
#endif
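
For context, the benefit of routing the per-CPU objects through free_object()
instead of kmem_cache_free() is that objects can be cached for reuse up to a
pool limit and only the overflow is actually released. Below is a minimal
standalone sketch of that "reuse if there is room, otherwise free" pattern;
the names (struct obj, pool_head, POOL_MAX) and the limit are simplified
assumptions for illustration, not the debugobjects implementation itself.

/*
 * Minimal standalone model of the "reuse if there is room, otherwise
 * free" pattern. Names and limits are simplified assumptions, not the
 * kernel's lib/debugobjects.c code.
 */
#include <stdio.h>
#include <stdlib.h>

#define POOL_MAX	4	/* assumed cap on cached objects */

struct obj {
	struct obj *next;
	int id;
};

static struct obj *pool_head;
static int pool_cnt;

/* Put an object back into the shared pool if it has room, else free it. */
static void free_object(struct obj *o)
{
	if (pool_cnt < POOL_MAX) {
		o->next = pool_head;
		pool_head = o;
		pool_cnt++;
	} else {
		free(o);
	}
}

/* Drain a per-CPU style free list through the regular free path. */
static void put_objects(struct obj *list)
{
	struct obj *o, *tmp;

	for (o = list; o; o = tmp) {
		tmp = o->next;
		free_object(o);
	}
}

int main(void)
{
	struct obj *list = NULL;

	/* Build a fake per-CPU free list of six objects. */
	for (int i = 0; i < 6; i++) {
		struct obj *o = malloc(sizeof(*o));

		o->id = i;
		o->next = list;
		list = o;
	}

	put_objects(list);
	printf("objects kept for reuse: %d\n", pool_cnt);	/* prints 4 */
	return 0;
}

Built with any C99 compiler, the sketch caches POOL_MAX objects for reuse and
frees the remaining two, which is the behaviour the patch restores for the
unplugged CPU's pool instead of unconditionally freeing everything.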