Message-ID: <0f69442d-b44e-4b30-b11e-793511db9f1e@arm.com>
Date: Wed, 17 Dec 2025 15:20:13 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Uladzislau Rezki <urezki@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Vishal Moola <vishal.moola@...il.com>, Dev Jain <dev.jain@....com>,
Baoquan He <bhe@...hat.com>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] mm/vmalloc: Add attempt_larger_order_alloc parameter
On 17/12/2025 12:02, Uladzislau Rezki wrote:
>> On 16/12/2025 21:19, Uladzislau Rezki (Sony) wrote:
>>> Introduce a module parameter to enable or disable the large-order
>>> allocation path in vmalloc. High-order allocations are disabled by
>>> default for now, but users may explicitly enable them at runtime if
>>> desired.
>>>
>>> High-order pages allocated for vmalloc are immediately split into
>>> order-0 pages and later freed as order-0, which means they do not
>>> feed the per-CPU page caches. As a result, high-order attempts tend
>>> to bypass the PCP fastpath and fall back to the buddy allocator,
>>> which can affect performance.
>>>
>>> However, when the PCP caches are empty, high-order allocations may
>>> show better performance characteristics, especially for larger
>>> allocation requests.
>>
>> I wonder if a better solution would be "allocate order-0 if available in the
>> PCP, else try large order, else fall back to order-0". Could that provide the
>> best of all worlds without needing a configuration knob?
>>
> I am not sure; to me it looks a bit odd.
Perhaps it would feel better if it were generalized to "first try allocation from
the PCP list, highest to lowest order, then try allocation from the buddy, highest
to lowest order"?
> Ideally it would be good to just free it as a high-order page and not as
> order-0 pieces.
Yeah, perhaps that's better. How about something like this (very lightly tested;
no performance results yet):
(And I should admit I'm not 100% sure it is safe to call free_frozen_pages()
with a contiguous run of order-0 pages, but I'm not seeing any warnings or
memory leaks when running mm selftests...)
---8<---
commit caa3e5eb5bfade81a32fa62d1a8924df1eb0f619
Author: Ryan Roberts <ryan.roberts@....com>
Date:   Wed Dec 17 15:11:08 2025 +0000

    WIP

    Signed-off-by: Ryan Roberts <ryan.roberts@....com>
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b155929af5b1..d25f5b867e6b 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -383,6 +383,8 @@ extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages_nolock(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 
+void free_pages_bulk(struct page *page, int nr_pages);
+
 #define __free_page(page) __free_pages((page), 0)
 #define free_page(addr) free_pages((addr), 0)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 822e05f1a964..5f11224cf353 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5304,6 +5304,48 @@ static void ___free_pages(struct page *page, unsigned int order,
 	}
 }
 
+static void free_frozen_pages_bulk(struct page *page, int nr_pages)
+{
+	while (nr_pages) {
+		unsigned int fit_order, align_order, order;
+		unsigned long pfn;
+
+		pfn = page_to_pfn(page);
+		fit_order = ilog2(nr_pages);
+		align_order = pfn ? __ffs(pfn) : fit_order;
+		order = min3(fit_order, align_order, MAX_PAGE_ORDER);
+
+		free_frozen_pages(page, order);
+
+		page += 1U << order;
+		nr_pages -= 1U << order;
+	}
+}
+
+void free_pages_bulk(struct page *page, int nr_pages)
+{
+	struct page *start = NULL;
+	bool can_free;
+	int i;
+
+	for (i = 0; i < nr_pages; i++, page++) {
+		VM_BUG_ON_PAGE(PageHead(page), page);
+		VM_BUG_ON_PAGE(PageTail(page), page);
+
+		can_free = put_page_testzero(page);
+
+		if (!can_free && start) {
+			free_frozen_pages_bulk(start, page - start);
+			start = NULL;
+		} else if (can_free && !start) {
+			start = page;
+		}
+	}
+
+	if (start)
+		free_frozen_pages_bulk(start, page - start);
+}
+
 /**
  * __free_pages - Free pages allocated with alloc_pages().
  * @page: The page pointer returned from alloc_pages().
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ecbac900c35f..8f782bac1ece 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3429,7 +3429,8 @@ void vfree_atomic(const void *addr)
 void vfree(const void *addr)
 {
 	struct vm_struct *vm;
-	int i;
+	struct page *start;
+	int i, nr;
 
 	if (unlikely(in_interrupt())) {
 		vfree_atomic(addr);
@@ -3455,17 +3456,26 @@ void vfree(const void *addr)
 	/* All pages of vm should be charged to same memcg, so use first one. */
 	if (vm->nr_pages && !(vm->flags & VM_MAP_PUT_PAGES))
 		mod_memcg_page_state(vm->pages[0], MEMCG_VMALLOC, -vm->nr_pages);
-	for (i = 0; i < vm->nr_pages; i++) {
+
+	start = vm->pages[0];
+	BUG_ON(!start);
+	nr = 1;
+	for (i = 1; i < vm->nr_pages; i++) {
 		struct page *page = vm->pages[i];
 
 		BUG_ON(!page);
-		/*
-		 * High-order allocs for huge vmallocs are split, so
-		 * can be freed as an array of order-0 allocations
-		 */
-		__free_page(page);
-		cond_resched();
+
+		if (start + nr != page) {
+			free_pages_bulk(start, nr);
+			start = page;
+			nr = 1;
+			cond_resched();
+		} else {
+			nr++;
+		}
 	}
+	free_pages_bulk(start, nr);
+
 	if (!(vm->flags & VM_MAP_PUT_PAGES))
 		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
 	kvfree(vm->pages);
---8<---
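For clarity, the loop in free_frozen_pages_bulk() greedily frees the largest
power-of-two block that still fits in the remaining count, is aligned to the
current pfn, and does not exceed MAX_PAGE_ORDER. Here is a standalone userspace
demo of just that arithmetic (made-up pfn and count, not kernel code):

#include <stdio.h>

#define MAX_PAGE_ORDER 10

/* Print the (pfn, order) chunks the kernel loop above would free. */
static void show_chunks(unsigned long pfn, int nr_pages)
{
	while (nr_pages) {
		/* ilog2(): highest order that fits in the remaining count. */
		unsigned int fit_order = 31 - __builtin_clz(nr_pages);
		/* __ffs(): highest order the start pfn is aligned to. */
		unsigned int align_order = pfn ? __builtin_ctzl(pfn) : fit_order;
		unsigned int order = fit_order < align_order ? fit_order : align_order;

		if (order > MAX_PAGE_ORDER)
			order = MAX_PAGE_ORDER;

		printf("free pfn %lu, order %u (%u pages)\n", pfn, order, 1U << order);
		pfn += 1UL << order;
		nr_pages -= 1 << order;
	}
}

int main(void)
{
	/* 13 pages starting at pfn 6 free as orders 1,3,1,0 (2+8+2+1). */
	show_chunks(6, 13);
	return 0;
}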
>
>>>
>>> Since the best strategy is workload-dependent, this patch adds a
>>> parameter letting users choose whether vmalloc should try
>>> high-order allocations or stay strictly on the order-0 fastpath.
>>>
>>> Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
>>> ---
>>> mm/vmalloc.c | 9 +++++++--
>>> 1 file changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>>> index d3a4725e15ca..f66543896b16 100644
>>> --- a/mm/vmalloc.c
>>> +++ b/mm/vmalloc.c
>>> @@ -43,6 +43,7 @@
>>>  #include <asm/tlbflush.h>
>>>  #include <asm/shmparam.h>
>>>  #include <linux/page_owner.h>
>>> +#include <linux/moduleparam.h>
>>>  
>>>  #define CREATE_TRACE_POINTS
>>>  #include <trace/events/vmalloc.h>
>>> @@ -3671,6 +3672,9 @@ vm_area_alloc_pages_large_order(gfp_t gfp, int nid, unsigned int order,
>>>  	return nr_allocated;
>>>  }
>>>  
>>> +static int attempt_larger_order_alloc;
>>> +module_param(attempt_larger_order_alloc, int, 0644);
>>
>> Would this be better as a bool? Docs say that you can then specify 0/1, y/n or
>> Y/N as the value; that's probably more intuitive?
>>
>> nit: I'd favour a shorter name. Perhaps large_order_alloc?
>>
> Thanks! We can switch to bool and use a shorter name, for sure.
>
> --
> Uladzislau Rezki
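P.S. For reference, the bool variant I was suggesting would look something like
this (the final name is of course still up for discussion):

static bool large_order_alloc;
module_param(large_order_alloc, bool, 0644);
MODULE_PARM_DESC(large_order_alloc,
		 "Try high-order allocations in vmalloc before falling back to order-0");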