Message-ID: <20251216211921.1401147-2-urezki@gmail.com>
Date: Tue, 16 Dec 2025 22:19:21 +0100
From: "Uladzislau Rezki (Sony)" <urezki@...il.com>
To: linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Vishal Moola <vishal.moola@...il.com>,
Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>,
Baoquan He <bhe@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Uladzislau Rezki <urezki@...il.com>
Subject: [PATCH 2/2] mm/vmalloc: Add attempt_larger_order_alloc parameter

Introduce a module parameter to enable or disable the large-order
allocation path in vmalloc. High-order allocations remain disabled
by default, but users may explicitly enable them at runtime if
desired.

High-order pages allocated for vmalloc are immediately split into
order-0 pages and later freed as order-0, which means they do not
feed the per-CPU page caches. As a result, high-order attempts tend
to bypass the PCP fastpath and fall back to the buddy allocator,
which can affect performance.

However, when the PCP caches are empty, high-order allocations may
show better performance characteristics, especially for larger
allocation requests.

Since the best strategy is workload-dependent, this patch adds a
parameter letting users choose whether vmalloc should try
high-order allocations or stay strictly on the order-0 fastpath.
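
Assuming the parameter is registered under the "vmalloc" module name
(the usual result of module_param() in code built into mm/vmalloc.o),
it could be enabled at boot with vmalloc.attempt_larger_order_alloc=1
on the kernel command line, or toggled at runtime through
/sys/module/vmalloc/parameters/attempt_larger_order_alloc, which the
0644 mode allows root to write without a reboot. The exact paths are
an assumption based on how built-in module parameters are exposed.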

Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
---
mm/vmalloc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d3a4725e15ca..f66543896b16 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -43,6 +43,7 @@
 #include <asm/tlbflush.h>
 #include <asm/shmparam.h>
 #include <linux/page_owner.h>
+#include <linux/moduleparam.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/vmalloc.h>
@@ -3671,6 +3672,9 @@ vm_area_alloc_pages_large_order(gfp_t gfp, int nid, unsigned int order,
 	return nr_allocated;
 }
 
+static int attempt_larger_order_alloc;
+module_param(attempt_larger_order_alloc, int, 0644);
+
 static inline unsigned int
 vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
@@ -3679,8 +3683,9 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	struct page *page;
 	int i;
 
-	nr_allocated = vm_area_alloc_pages_large_order(gfp, nid,
-			order, nr_pages, pages);
+	if (attempt_larger_order_alloc)
+		nr_allocated = vm_area_alloc_pages_large_order(gfp, nid,
+				order, nr_pages, pages);
 
 	/*
 	 * For order-0 pages we make use of bulk allocator, if
--
2.47.3