Message-Id: <8f649d7e5919c56bcc5d2d356c9584fdcb87800e.1698077459.git.andreyknvl@google.com>
Date: Mon, 23 Oct 2023 18:22:37 +0200
From: andrey.konovalov@...ux.dev
To: Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>
Cc: Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, kasan-dev@...glegroups.com,
Evgenii Stepanov <eugenis@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH v3 06/19] lib/stackdepot: fix and clean up atomic annotations
From: Andrey Konovalov <andreyknvl@...gle.com>
Drop smp_load_acquire() from the next_pool_required check in
depot_init_pool(), as both depot_init_pool() and all smp_store_release()
writes to this variable are executed under the stack depot lock.
Also simplify and clean up comments accompanying the use of atomic
accesses in the stack depot code.
Reviewed-by: Alexander Potapenko <glider@...gle.com>
Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
---
This patch is not strictly required, as the atomic accesses are fully
removed in one of the later patches. However, I decided to keep it just
in case we end up needing these atomics in future iterations of this
series.
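As a side note for reviewers less familiar with these annotations, here
is a minimal userspace sketch of the ordering argument above, using C11
atomics and pthreads in place of the kernel's smp_load_acquire() and
smp_store_release(): a flag that is only ever written under a lock can
be read with a plain load by code that also holds that lock, while a
lockless reader still needs an acquire load paired with the release
store. The names here (pool_lock, depot_init_pool_sketch(),
stack_depot_save_sketch()) merely mirror the stack depot code; this is
an illustration, not kernel code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int next_pool_required = 1;

/* Runs with pool_lock held. A relaxed (plain) load is enough here,
 * because every store to next_pool_required also happens under
 * pool_lock, so the lock itself orders this read against all writes. */
static void depot_init_pool_sketch(void)
{
	if (!atomic_load_explicit(&next_pool_required, memory_order_relaxed))
		return;

	/* ... set up the next pool here ... */

	/* The release store pairs with the acquire load on the lockless
	 * fast path below. */
	atomic_store_explicit(&next_pool_required, 0, memory_order_release);
}

/* Lockless fast path. This runs without pool_lock, so it must use an
 * acquire load to also observe the pool setup that happened before the
 * paired release store. */
static int stack_depot_save_sketch(void)
{
	return atomic_load_explicit(&next_pool_required, memory_order_acquire);
}

int main(void)
{
	pthread_mutex_lock(&pool_lock);
	depot_init_pool_sketch();
	pthread_mutex_unlock(&pool_lock);

	printf("next_pool_required is now %d\n", stack_depot_save_sketch());
	return 0;
}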
Changes v2->v3:
- Keep parentheses when referring to functions in comments.
- Add a comment that explains why depot_init_pool() reads
  next_pool_required non-atomically.
Changes v1->v2:
- Minor comment fix as suggested by Marco.
- Drop READ_ONCE marking for next_pool_required.
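For reference, the |bucket| hunks in the diff below rely on a different
pattern: a writer fully initializes a new stack record and only then
publishes it with smp_store_release(), while the lockless fast path
reads the bucket head with smp_load_acquire() before traversing. Below
is a minimal userspace sketch of that publish pattern, again with C11
atomics and hypothetical names (stack_record_sketch, insert_record(),
find_record()) that only loosely mirror the real code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct stack_record_sketch {
	struct stack_record_sketch *next;
	unsigned long entry;	/* stands in for the actual trace data */
};

static pthread_mutex_t depot_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic(struct stack_record_sketch *) bucket;

/* Lockless fast path: the acquire load of the bucket head pairs with
 * the release store in insert_record() and guarantees that a record's
 * fields are visible before the record itself becomes reachable. */
static struct stack_record_sketch *find_record(unsigned long entry)
{
	struct stack_record_sketch *s;

	for (s = atomic_load_explicit(&bucket, memory_order_acquire);
	     s != NULL; s = s->next)
		if (s->entry == entry)
			return s;
	return NULL;
}

/* Slow path, called with depot_lock held: fully initialize the new
 * record first, then publish it with a release store. */
static void insert_record(struct stack_record_sketch *new, unsigned long entry)
{
	new->entry = entry;
	new->next = atomic_load_explicit(&bucket, memory_order_relaxed);
	atomic_store_explicit(&bucket, new, memory_order_release);
}

int main(void)
{
	static struct stack_record_sketch node;

	pthread_mutex_lock(&depot_lock);
	insert_record(&node, 42UL);
	pthread_mutex_unlock(&depot_lock);

	printf("found: %d\n", find_record(42UL) != NULL);
	return 0;
}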
---
lib/stackdepot.c | 29 ++++++++++++++---------------
1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 128ece21afe9..60aea549429a 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -225,10 +225,10 @@ static void depot_init_pool(void **prealloc)
/*
* If the next pool is already initialized or the maximum number of
* pools is reached, do not use the preallocated memory.
- * smp_load_acquire() here pairs with smp_store_release() below and
- * in depot_alloc_stack().
+ * Access next_pool_required non-atomically, as there are no concurrent
+ * write accesses to this variable.
*/
- if (!smp_load_acquire(&next_pool_required))
+ if (!next_pool_required)
return;
/* Check if the current pool is not yet allocated. */
@@ -249,8 +249,8 @@ static void depot_init_pool(void **prealloc)
* At this point, either the next pool is initialized or the
* maximum number of pools is reached. In either case, take
* note that initializing another pool is not required.
- * This smp_store_release pairs with smp_load_acquire() above
- * and in stack_depot_save().
+ * smp_store_release() pairs with smp_load_acquire() in
+ * stack_depot_save().
*/
smp_store_release(&next_pool_required, 0);
}
@@ -273,7 +273,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
/*
* Move on to the next pool.
- * WRITE_ONCE pairs with potential concurrent read in
+ * WRITE_ONCE() pairs with potential concurrent read in
* stack_depot_fetch().
*/
WRITE_ONCE(pool_index, pool_index + 1);
@@ -281,8 +281,8 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
/*
* If the maximum number of pools is not reached, take note
* that the next pool needs to be initialized.
- * smp_store_release() here pairs with smp_load_acquire() in
- * stack_depot_save() and depot_init_pool().
+ * smp_store_release() pairs with smp_load_acquire() in
+ * stack_depot_save().
*/
if (pool_index + 1 < DEPOT_MAX_POOLS)
smp_store_release(&next_pool_required, 1);
@@ -323,7 +323,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
{
union handle_parts parts = { .handle = handle };
/*
- * READ_ONCE pairs with potential concurrent write in
+ * READ_ONCE() pairs with potential concurrent write in
* depot_alloc_stack().
*/
int pool_index_cached = READ_ONCE(pool_index);
@@ -413,8 +413,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
/*
* Fast path: look the stack trace up without locking.
- * The smp_load_acquire() here pairs with smp_store_release() to
- * |bucket| below.
+ * smp_load_acquire() pairs with smp_store_release() to |bucket| below.
*/
found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
if (found)
@@ -424,8 +423,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
* Check if another stack pool needs to be initialized. If so, allocate
* the memory now - we won't be able to do that under the lock.
*
- * The smp_load_acquire() here pairs with smp_store_release() to
- * |next_pool_inited| in depot_alloc_stack() and depot_init_pool().
+ * smp_load_acquire() pairs with smp_store_release() in
+ * depot_alloc_stack() and depot_init_pool().
*/
if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
/*
@@ -451,8 +450,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
if (new) {
new->next = *bucket;
/*
- * This smp_store_release() pairs with
- * smp_load_acquire() from |bucket| above.
+ * smp_store_release() pairs with smp_load_acquire()
+ * from |bucket| above.
*/
smp_store_release(bucket, new);
found = new;
--
2.25.1