Message-ID: <20241014105912.3207374-15-ryan.roberts@arm.com>
Date: Mon, 14 Oct 2024 11:58:22 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>,
Greg Marsden <greg.marsden@...cle.com>,
Ivan Ivanov <ivan.ivanov@...e.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Matthias Brugger <mbrugger@...e.com>,
Miroslav Benes <mbenes@...e.cz>,
Will Deacon <will@...nel.org>
Cc: Ryan Roberts <ryan.roberts@....com>,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [RFC PATCH v1 15/57] stackdepot: Remove PAGE_SIZE compile-time constant assumption

To prepare for supporting boot-time page size selection, refactor the code
to remove the assumption that PAGE_SIZE is a compile-time constant. The
code is intended to be functionally equivalent when a compile-time page
size is active.

"union handle_parts" previously calculated the number of bits required
for its pool index and offset members based on PAGE_SHIFT. This is
problematic for boot-time page size builds because the actual page size
isn't known until boot-time.
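For reference, the handle packs these fields into a single u32, roughly
as follows (paraphrased from include/linux/stackdepot.h; exact field
names vary across kernel versions):

  union handle_parts {
  	depot_stack_handle_t handle;
  	struct {
  		u32 pool_index_plus_1 : DEPOT_POOL_INDEX_BITS;
  		u32 offset            : DEPOT_OFFSET_BITS;
  		u32 extra             : STACK_DEPOT_EXTRA_BITS;
  	};
  };
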
We could keep the pool at 4 runtime pages and simply use PAGE_SHIFT_MAX
to size the worst-case offset field, but whenever PAGE_SIZE is set
smaller than the max, offset bits would then sit unused that could
otherwise have gone to the pool index, with the end result that stack
depot could address less memory than it should.

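To make the waste concrete, assume PAGE_SHIFT_MAX = 16 (a 64K maximum
page size, as on arm64) and the 32-bit handle:

  DEPOT_OFFSET_BITS = DEPOT_POOL_ORDER + PAGE_SHIFT_MAX - DEPOT_STACK_ALIGN
                    = 2 + 16 - 4 = 14

If the system then booted with 4K pages while the pool remained an
order-2 allocation (16K), only 2 + 12 - 4 = 10 of those 14 offset bits
could ever be non-zero; the 4 wasted bits would shrink the pool index
range, and with it the total memory stack depot can address, by a
factor of 16.
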
To avoid needing to define the offset and index bit widths dynamically,
let's instead fix the pool size and derive the allocation order at
runtime from PAGE_SIZE. This means that the fields' widths can remain
static, with the downside being a slightly increased risk of failing to
allocate the large folio.

This only affects boot-time page size builds; compile-time page size
builds will still always allocate order-2 folios.

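To illustrate with the same assumed PAGE_SHIFT_MAX = 16, DEPOT_POOL_SIZE
is fixed at 1 << 18 = 256K and the runtime order falls out of
get_order():

  get_order(DEPOT_POOL_SIZE);	/* == 2 if the boot-time page size is 64K */
  get_order(DEPOT_POOL_SIZE);	/* == 6 if the boot-time page size is 4K */

The pool always spans the same number of bytes; a smaller runtime page
size just means a higher-order, and therefore somewhat harder to
satisfy, allocation.
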
Additionally, wrap global variables that are initialized with
PAGE_SIZE-derived values in DEFINE_GLOBAL_PAGE_SIZE_VAR() so that their
initialization can be deferred for boot-time page size builds.

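For readers looking at this patch in isolation: conceptually, in
compile-time page size builds the wrapper collapses to a plain
initialized definition, along the lines of the simplified sketch below
(this is not the real definition, which an earlier patch in this series
introduces; boot-time builds instead defer the assignment until the
page size is known):

  /*
   * Simplified sketch of the compile-time page size case only; see the
   * earlier patch in this series for the real macro.
   */
  #define DEFINE_GLOBAL_PAGE_SIZE_VAR(type, name, init)	\
  	type name = (init)

Read that way, the pool_offset change below still reduces to
"static size_t pool_offset = DEPOT_POOL_SIZE;" whenever PAGE_SIZE is a
compile-time constant.
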
Signed-off-by: Ryan Roberts <ryan.roberts@....com>
---
***NOTE***
Any confused maintainers may want to read the cover note here for context:
https://lore.kernel.org/all/20241014105514.3206191-1-ryan.roberts@arm.com/

 include/linux/stackdepot.h | 6 +++---
 lib/stackdepot.c           | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index e9ec32fb97d4a..ac877a4e90406 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -32,10 +32,10 @@ typedef u32 depot_stack_handle_t;
 
 #define DEPOT_HANDLE_BITS (sizeof(depot_stack_handle_t) * 8)
 
-#define DEPOT_POOL_ORDER 2 /* Pool size order, 4 pages */
-#define DEPOT_POOL_SIZE (1LL << (PAGE_SHIFT + DEPOT_POOL_ORDER))
+#define DEPOT_POOL_ORDER 2 /* Pool size order, 4 pages of PAGE_SIZE_MAX */
+#define DEPOT_POOL_SIZE (1LL << (PAGE_SHIFT_MAX + DEPOT_POOL_ORDER))
 #define DEPOT_STACK_ALIGN 4
-#define DEPOT_OFFSET_BITS (DEPOT_POOL_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN)
+#define DEPOT_OFFSET_BITS (DEPOT_POOL_ORDER + PAGE_SHIFT_MAX - DEPOT_STACK_ALIGN)
 #define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
 			       STACK_DEPOT_EXTRA_BITS)
 
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5ed34cc963fc3..974351f0e9e3c 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -68,7 +68,7 @@ static void *new_pool;
 /* Number of pools in stack_pools. */
 static int pools_num;
 /* Offset to the unused space in the currently used pool. */
-static size_t pool_offset = DEPOT_POOL_SIZE;
+static DEFINE_GLOBAL_PAGE_SIZE_VAR(size_t, pool_offset, DEPOT_POOL_SIZE);
 /* Freelist of stack records within stack_pools. */
 static LIST_HEAD(free_stacks);
 /* The lock must be held when performing pool or freelist modifications. */
@@ -625,7 +625,7 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
 	 */
 	if (unlikely(can_alloc && !READ_ONCE(new_pool))) {
 		page = alloc_pages(gfp_nested_mask(alloc_flags),
-				   DEPOT_POOL_ORDER);
+				   get_order(DEPOT_POOL_SIZE));
 		if (page)
 			prealloc = page_address(page);
 	}
@@ -663,7 +663,7 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
 exit:
 	if (prealloc) {
 		/* Stack depot didn't use this memory, free it. */
-		free_pages((unsigned long)prealloc, DEPOT_POOL_ORDER);
+		free_pages((unsigned long)prealloc, get_order(DEPOT_POOL_SIZE));
 	}
 	if (found)
 		handle = found->handle.handle;
--
2.43.0