Message-ID: <20251014182754.4329-1-vishal.moola@gmail.com>
Date: Tue, 14 Oct 2025 11:27:54 -0700
From: "Vishal Moola (Oracle)" <vishal.moola@...il.com>
To: linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Uladzislau Rezki <urezki@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: [RFC PATCH] mm/vmalloc: request large order pages from buddy allocator
Sometimes vm_area_alloc_pages() wants many pages from the buddy
allocator. Rather than making requests to the buddy allocator for at
most 100 pages at a time, we can eagerly request large order pages in a
smaller number of calls.

We still split the large order pages down to order-0, since the rest of
the vmalloc code (and some callers) depends on that. We still defer to
the bulk allocator for order-0 requests, and to the fallback path when
a large order allocation fails.
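The core of the new path looks roughly like this (a simplified sketch
of the hunk below; the NUMA placement, the nofail bailout, and the loop
bookkeeping are elided):

	gfp_t large_gfp = (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN;
	struct page *page;

	/*
	 * One order-N request instead of 2^N order-0 requests. Dropping
	 * __GFP_DIRECT_RECLAIM and adding __GFP_NOWARN make the attempt
	 * fail fast and quietly when no large block is readily available.
	 */
	page = alloc_pages(large_gfp, large_order);
	if (page) {
		/* Hand back 2^N independent order-0 pages. */
		split_page(page, large_order);
		for (i = 0; i < (1U << large_order); i++)
			pages[nr_allocated + i] = page + i;
		nr_allocated += 1U << large_order;
	}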
Running repeated allocation loops on a small 4GB system finds:

1000 2MB allocations:
          [Baseline]    [This patch]
    real    46.310s       34.380s
    user     0.001s        0.008s
    sys     46.058s       34.152s

10000 200KB allocations:
          [Baseline]    [This patch]
    real    56.104s       43.946s
    user     0.001s        0.003s
    sys     55.375s       43.259s

10000 20KB allocations:
          [Baseline]    [This patch]
    real     8.438s        9.160s
    user     0.001s        0.002s
    sys      7.936s        8.671s
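The exact timing harness is not included here; a hypothetical test
module along the following lines (names, size, and loop count are
illustrative, taken from the 2MB run above) exercises the same
allocation path, timed with e.g. "time insmod vmalloc_bench.ko":

	#include <linux/module.h>
	#include <linux/sizes.h>
	#include <linux/vmalloc.h>

	static int __init vmalloc_bench_init(void)
	{
		int i;

		for (i = 0; i < 1000; i++) {
			void *p = vmalloc(SZ_2M);	/* one 2MB allocation */

			if (!p)
				return -ENOMEM;
			vfree(p);
		}

		/* Refuse to load so each run is a fresh insmod. */
		return -EAGAIN;
	}
	module_init(vmalloc_bench_init);

	MODULE_LICENSE("GPL");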
This is an RFC; comments and thoughts are welcome. There is a
clear benefit for large allocations, but some regression for
smaller allocations.
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@...il.com>
---
mm/vmalloc.c | 34 +++++++++++++++++++++++++++++++++-
1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 97cef2cc14d3..0a25e5cf841c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3621,6 +3621,38 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
unsigned int nr_allocated = 0;
struct page *page;
int i;
+ gfp_t large_gfp = (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN;
+ unsigned int large_order = ilog2(nr_pages - nr_allocated);
+
+ /*
+ * Initially, attempt to have the page allocator give us large order
+ * pages. Do not attempt allocations smaller than 'order' chunks,
+ * since __vmap_pages_range() expects physically contiguous chunks of
+ * exactly 'order' pages.
+ */
+ while (nr_allocated < nr_pages && large_order > order) {
+ /*
+ * High-order nofail allocations are really expensive and
+ * potentially dangerous (premature OOM, disruptive reclaim,
+ * compaction, etc.).
+ */
+ if (gfp & __GFP_NOFAIL)
+ break;
+ if (nid == NUMA_NO_NODE)
+ page = alloc_pages_noprof(large_gfp, large_order);
+ else
+ page = alloc_pages_node_noprof(nid, large_gfp, large_order);
+
+ if (unlikely(!page))
+ break;
+
+ split_page(page, large_order);
+ for (i = 0; i < (1U << large_order); i++)
+ pages[nr_allocated + i] = page + i;
+
+ nr_allocated += 1U << large_order;
+ large_order = ilog2(nr_pages - nr_allocated);
+ }
/*
* For order-0 pages we make use of bulk allocator, if
@@ -3665,7 +3697,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
}
}
- /* High-order pages or fallback path if "bulk" fails. */
+ /* High-order arch pages or fallback path if "bulk" fails. */
while (nr_allocated < nr_pages) {
if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
break;
--
2.51.0