Message-ID: <20240904133944.2124-4-thunder.leizhen@huawei.com>
Date: Wed, 4 Sep 2024 21:39:41 +0800
From: Zhen Lei <thunder.leizhen@...wei.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Thomas Gleixner
<tglx@...utronix.de>, <linux-kernel@...r.kernel.org>
CC: Zhen Lei <thunder.leizhen@...wei.com>
Subject: [PATCH v2 3/6] debugobjects: Remove redundant checks in fill_pool()
	(1)	if (READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level)
			return;

	(2)	while (READ_ONCE(obj_nr_tofree) &&
		       READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
			raw_spin_lock_irqsave(&pool_lock, flags);
	(3)		while (obj_nr_tofree &&
	(3)		       (obj_pool_free < debug_objects_pool_min_level)) {
				... ...
			}
			raw_spin_unlock_irqrestore(&pool_lock, flags);
		}
The conditions of the outer loop (2) and the inner loop (3) are exactly
the same. The inner loop runs under the protection of the spinlock, so
when it terminates, at least one of the two loop conditions must be
false. The window between the end of the inner loop and the
re-evaluation of the outer loop condition is extremely short, so the
probability that another CPU modifies 'obj_nr_tofree' or 'obj_pool_free'
within that window is almost zero. Moreover, even after the outer loop
exits, other CPUs can still modify these two variables, so re-evaluating
the outer loop condition provides no practical benefit beyond an extra
check. Therefore, change the outer 'while' into an 'if'. Once that is
done, the second condition of the new 'if' is already guaranteed by
check (1) above and can be removed.
Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
---
lib/debugobjects.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 6329a86edcf12ac..7a8ccc94cb037ba 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -135,15 +135,13 @@ static void fill_pool(void)
return;
/*
- * Reuse objs from the global free list; they will be reinitialized
+ * Reuse objs from the global tofree list; they will be reinitialized
* when allocating.
*
- * Both obj_nr_tofree and obj_pool_free are checked locklessly; the
- * READ_ONCE()s pair with the WRITE_ONCE()s in pool_lock critical
- * sections.
+ * obj_nr_tofree is checked locklessly; the READ_ONCE() pairs with
+ * the WRITE_ONCE()s in pool_lock critical sections.
*/
- while (READ_ONCE(obj_nr_tofree) &&
- READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
+ if (READ_ONCE(obj_nr_tofree)) {
raw_spin_lock_irqsave(&pool_lock, flags);
/*
* Recheck with the lock held as the worker thread might have
--
2.34.1