Message-ID: <aO97BjvNZNh0UV3u@fedora>
Date: Wed, 15 Oct 2025 03:44:22 -0700
From: "Vishal Moola (Oracle)" <vishal.moola@...il.com>
To: Uladzislau Rezki <urezki@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH] mm/vmalloc: request large order pages from buddy
 allocator

On Wed, Oct 15, 2025 at 10:23:19AM +0200, Uladzislau Rezki wrote:
> On Tue, Oct 14, 2025 at 11:27:54AM -0700, Vishal Moola (Oracle) wrote:
> > Sometimes, vm_area_alloc_pages() will want many pages from the buddy
> > allocator. Rather than making requests to the buddy allocator for at
> > most 100 pages at a time, we can eagerly request large order pages a
> > smaller number of times.
> > 
> > We still split the large order pages down to order-0, as the rest of the
> > vmalloc code (and some callers) depends on it. We still defer to the bulk
> > allocator and fallback path for order-0 requests or on failure.
> > 
> > Running 1000 iterations of allocations on a small 4GB system finds:
> > 
> > 1000 2MB allocations:
> > 	[Baseline]			[This patch]
> > 	real    46.310s			real    34.380s
> > 	user    0.001s			user    0.008s
> > 	sys     46.058s			sys     34.152s
> > 
> > 10000 200KB allocations:
> > 	[Baseline]			[This patch]
> > 	real    56.104s			real    43.946s
> > 	user    0.001s			user    0.003s
> > 	sys     55.375s			sys     43.259s
> > 
> > 10000 20KB allocations:
> > 	[Baseline]			[This patch]
> > 	real    8.438s			real    9.160s
> > 	user    0.001s			user    0.002s
> > 	sys     7.936s			sys     8.671s
> > 
> > This is an RFC; comments and thoughts are welcome. There is a
> > clear benefit to be had for large allocations, but there is
> > some regression for smaller allocations.
> > 
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@...il.com>
> > ---
> >  mm/vmalloc.c | 34 +++++++++++++++++++++++++++++++++-
> >  1 file changed, 33 insertions(+), 1 deletion(-)
> > 
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 97cef2cc14d3..0a25e5cf841c 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -3621,6 +3621,38 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
> >  	unsigned int nr_allocated = 0;
> >  	struct page *page;
> >  	int i;
> > +	gfp_t large_gfp = (gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN;
> > +	unsigned int large_order = ilog2(nr_pages - nr_allocated);
> >
> If large_order is > MAX_ORDER - 1 then there is no need to even
> try a large_order attempt.
> 
> >> unsigned int large_order = ilog2(nr_pages - nr_allocated);
> I think it is better to introduce a "remaining" variable, which
> is nr_pages - nr_allocated. On entry, "remaining" can be set to
> just nr_pages because "nr_allocated" is zero.

I like the idea too.
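
For my own clarity, I'm picturing something roughly like the below
(untested sketch; I'm assuming MAX_PAGE_ORDER is the right cap to clamp
against for your MAX_ORDER point):

	unsigned int remaining = nr_pages;	/* nr_allocated is 0 on entry */
	unsigned int large_order = min_t(unsigned int, ilog2(remaining),
					 MAX_PAGE_ORDER);

	while (large_order > order && remaining) {
		/*
		 * alloc_pages_noprof()/alloc_pages_node_noprof(), split_page()
		 * and the pages[] copy stay as in the patch.
		 */
		nr_allocated += 1U << large_order;
		remaining -= 1U << large_order;
		if (!remaining)
			break;
		large_order = min_t(unsigned int, ilog2(remaining),
				    MAX_PAGE_ORDER);
	}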

> Maybe it is worth dropping/warning if __GFP_COMP is set as well?

split_page() has a BUG_ON(PageCompound) within, so we don't need one out
here for now.
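
(If we did want to be defensive about it anyway, I think it would just be
a matter of masking __GFP_COMP out of large_gfp alongside
__GFP_DIRECT_RECLAIM, e.g.

	gfp_t large_gfp = (gfp & ~(__GFP_DIRECT_RECLAIM | __GFP_COMP)) |
			  __GFP_NOWARN;

but relying on the existing check in split_page() seems fine to me.)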

> > +
> > +	/*
> > +	 * Initially, attempt to have the page allocator give us large order
> > +	 * pages. Do not attempt to allocate chunks smaller than order, since
> > +	 * __vmap_pages_range() expects physically contiguous chunks that are
> > +	 * exactly order long.
> > +	 */
> > +	while (large_order > order && nr_allocated < nr_pages) {
> > +		/*
> > +		 * High-order nofail allocations are really expensive and
> > +		 * potentially dangerous (premature OOM, disruptive reclaim
> > +		 * and compaction, etc.).
> > +		 */
> > +		if (gfp & __GFP_NOFAIL)
> > +			break;
> > +		if (nid == NUMA_NO_NODE)
> > +			page = alloc_pages_noprof(large_gfp, large_order);
> > +		else
> > +			page = alloc_pages_node_noprof(nid, large_gfp, large_order);
> > +
> > +		if (unlikely(!page))
> > +			break;
> > +
> > +		split_page(page, large_order);
> > +		for (i = 0; i < (1U << large_order); i++)
> > +			pages[nr_allocated + i] = page + i;
> > +
> > +		nr_allocated += 1U << large_order;
> > +		large_order = ilog2(nr_pages - nr_allocated);
> > +	}
> >  
> So this is a third path for page allocation. The question is: should we
> try all orders? As Matthew already noted, what if there is no order-5
> page but there is an order-4 page? We could keep trying until we have
> checked all orders; the request can then be fulfilled with pages of
> different orders.
>
> The concern then is whether this wastes high-order pages, because we can
> easily fall back to the single-page allocator, whereas someone else in
> the system cannot.

I feel like if we have high-order pages available we'd rather allocate
those. Since the buddy allocator coalesces the pages when they're freed
again, as soon as these allocations free up we are much more likely to
have large-order pages ready to go again.
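
Just to make sure we're reading the "try all orders" idea the same way,
I take it to mean something roughly like this (untested sketch, reusing
the hypothetical "remaining" from above and eliding the NUMA_NO_NODE
branch):

	while (remaining >= (2U << order)) {
		unsigned int try_order = min_t(unsigned int, ilog2(remaining),
					       MAX_PAGE_ORDER);

		page = NULL;
		for (; try_order > order; try_order--) {
			page = alloc_pages_node_noprof(nid, large_gfp, try_order);
			if (page)
				break;
		}
		if (!page)
			break;	/* fall through to the bulk/order-0 paths */

		split_page(page, try_order);
		for (i = 0; i < (1U << try_order); i++)
			pages[nr_allocated + i] = page + i;
		nr_allocated += 1U << try_order;
		remaining -= 1U << try_order;
	}

Is that how you see it?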

> Apart from that, maybe we can drop the bulk path instead of having three paths?

Probably. I'd say that just depends on whether we care about maintaining
the optimizations for smaller vmalloc() allocations - which I have no
strong opinion on.

> --
> Uladzislau Rezki
