Message-Id: <e78360a883edac7bc3c6a351c99a6019beacf264.1694625260.git.andreyknvl@google.com>
Date: Wed, 13 Sep 2023 19:14:31 +0200
From: andrey.konovalov@...ux.dev
To: Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>
Cc: Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, kasan-dev@...glegroups.com,
Evgenii Stepanov <eugenis@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH v2 06/19] lib/stackdepot: fix and clean-up atomic annotations
From: Andrey Konovalov <andreyknvl@...gle.com>
Simplify comments accompanying the use of atomic accesses in the
stack depot code.
Also drop smp_load_acquire from next_pool_required in depot_init_pool,
as both depot_init_pool and all of the smp_store_release writes to this
variable are executed under the stack depot lock.
Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
---
This patch is not strictly required, as the atomic accesses are fully
removed in one of the later patches. However, I decided to keep the
patch just in case we end up needing these atomics in the following
iterations of this series.
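For reference, below is a minimal sketch of the acquire/release pattern
discussed above; the names (flag, lock, writer, *_reader) are illustrative
only and are not the actual stack depot identifiers. The point is that a
flag whose writers all run under a lock still needs the
smp_store_release()/smp_load_acquire() pairing for lockless readers, but a
reader that itself holds the same lock is already serialized against every
writer and can use a plain load - which is why the acquire in
depot_init_pool can go away.

  #include <linux/spinlock.h>
  #include <linux/atomic.h>

  /* Sketch only: "flag" plays the role of next_pool_required. */
  static int flag;
  static DEFINE_RAW_SPINLOCK(lock);

  /* Every write to "flag" happens with "lock" held. */
  static void writer(void)
  {
          unsigned long irqflags;

          raw_spin_lock_irqsave(&lock, irqflags);
          /* Pairs with smp_load_acquire() in lockless_reader(). */
          smp_store_release(&flag, 1);
          raw_spin_unlock_irqrestore(&lock, irqflags);
  }

  /* Lockless fast path: the acquire is still needed here. */
  static int lockless_reader(void)
  {
          return smp_load_acquire(&flag);
  }

  /*
   * Runs with "lock" held: the lock already orders this read against
   * every writer, so a plain load is sufficient.
   */
  static int locked_reader(void)
  {
          lockdep_assert_held(&lock);
          return flag;
  }
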
Changes v1->v2:
- Minor comment fix as suggested by Marco.
- Drop READ_ONCE marking for next_pool_required.
---
lib/stackdepot.c | 27 ++++++++++++---------------
1 file changed, 12 insertions(+), 15 deletions(-)
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 128ece21afe9..babd453261f0 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -225,10 +225,8 @@ static void depot_init_pool(void **prealloc)
/*
* If the next pool is already initialized or the maximum number of
* pools is reached, do not use the preallocated memory.
- * smp_load_acquire() here pairs with smp_store_release() below and
- * in depot_alloc_stack().
*/
- if (!smp_load_acquire(&next_pool_required))
+ if (!next_pool_required)
return;
/* Check if the current pool is not yet allocated. */
@@ -249,8 +247,8 @@ static void depot_init_pool(void **prealloc)
* At this point, either the next pool is initialized or the
* maximum number of pools is reached. In either case, take
* note that initializing another pool is not required.
- * This smp_store_release pairs with smp_load_acquire() above
- * and in stack_depot_save().
+ * smp_store_release pairs with smp_load_acquire in
+ * stack_depot_save.
*/
smp_store_release(&next_pool_required, 0);
}
@@ -274,15 +272,15 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
/*
* Move on to the next pool.
* WRITE_ONCE pairs with potential concurrent read in
- * stack_depot_fetch().
+ * stack_depot_fetch.
*/
WRITE_ONCE(pool_index, pool_index + 1);
pool_offset = 0;
/*
* If the maximum number of pools is not reached, take note
* that the next pool needs to initialized.
- * smp_store_release() here pairs with smp_load_acquire() in
- * stack_depot_save() and depot_init_pool().
+ * smp_store_release pairs with smp_load_acquire in
+ * stack_depot_save.
*/
if (pool_index + 1 < DEPOT_MAX_POOLS)
smp_store_release(&next_pool_required, 1);
@@ -324,7 +322,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
union handle_parts parts = { .handle = handle };
/*
* READ_ONCE pairs with potential concurrent write in
- * depot_alloc_stack().
+ * depot_alloc_stack.
*/
int pool_index_cached = READ_ONCE(pool_index);
void *pool;
@@ -413,8 +411,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
/*
* Fast path: look the stack trace up without locking.
- * The smp_load_acquire() here pairs with smp_store_release() to
- * |bucket| below.
+ * smp_load_acquire pairs with smp_store_release to |bucket| below.
*/
found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
if (found)
@@ -424,8 +421,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
* Check if another stack pool needs to be initialized. If so, allocate
* the memory now - we won't be able to do that under the lock.
*
- * The smp_load_acquire() here pairs with smp_store_release() to
- * |next_pool_inited| in depot_alloc_stack() and depot_init_pool().
+ * smp_load_acquire pairs with smp_store_release in depot_alloc_stack
+ * and depot_init_pool.
*/
if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
/*
@@ -451,8 +448,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
if (new) {
new->next = *bucket;
/*
- * This smp_store_release() pairs with
- * smp_load_acquire() from |bucket| above.
+ * smp_store_release pairs with smp_load_acquire
+ * from |bucket| above.
*/
smp_store_release(bucket, new);
found = new;
--
2.25.1