Message-ID: <aUMDCSTewPSLCbYM@milan>
Date: Wed, 17 Dec 2025 20:22:49 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Uladzislau Rezki <urezki@...il.com>, linux-mm@...ck.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Vishal Moola <vishal.moola@...il.com>, Dev Jain <dev.jain@....com>,
	Baoquan He <bhe@...hat.com>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] mm/vmalloc: Add attempt_larger_order_alloc parameter

On Wed, Dec 17, 2025 at 05:01:19PM +0000, Ryan Roberts wrote:
> On 17/12/2025 15:20, Ryan Roberts wrote:
> > On 17/12/2025 12:02, Uladzislau Rezki wrote:
> >>> On 16/12/2025 21:19, Uladzislau Rezki (Sony) wrote:
> >>>> Introduce a module parameter to enable or disable the large-order
> >>>> allocation path in vmalloc. High-order allocations are disabled by
> >>>> default for now, but users may explicitly enable them at runtime if
> >>>> desired.
> >>>>
> >>>> High-order pages allocated for vmalloc are immediately split into
> >>>> order-0 pages and later freed as order-0, which means they do not
> >>>> feed the per-CPU page caches. As a result, high-order attempts tend
> >>>> to bypass the PCP fastpath and fall back to the buddy allocator,
> >>>> which can affect performance.
> >>>>
> >>>> However, when the PCP caches are empty, high-order allocations may
> >>>> show better performance characteristics, especially for larger
> >>>> allocation requests.
> >>>
> >>> I wonder if a better solution would be "allocate order-0 if available in pcp,
> >>> else try large order, else fall back to order-0". Could that provide the best of
> >>> all worlds without needing a configuration knob?
> >>>
> >> I am not sure; to me it looks a bit odd.
> > 
> > Perhaps it would feel better if it was generalized to "first try allocation from
> > PCP list, highest to lowest order, then try allocation from the buddy, highest
> > to lowest order"?
> > 
> >> Ideally it would be
> >> good to just free it as a high-order page and not as order-0 pieces.
> > 
> > Yeah perhaps that's better. How about something like this (very lightly tested
> > and no performance results yet):
> > 
> > (And I should admit I'm not 100% sure it is safe to call free_frozen_pages()
> > with a contiguous run of order-0 pages, but I'm not seeing any warnings or
> > memory leaks when running mm selftests...)
> > 
> > ---8<---
> > commit caa3e5eb5bfade81a32fa62d1a8924df1eb0f619
> > Author: Ryan Roberts <ryan.roberts@....com>
> > Date:   Wed Dec 17 15:11:08 2025 +0000
> > 
> >     WIP
> > 
> >     Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> > 
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index b155929af5b1..d25f5b867e6b 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -383,6 +383,8 @@ extern void __free_pages(struct page *page, unsigned int order);
> >  extern void free_pages_nolock(struct page *page, unsigned int order);
> >  extern void free_pages(unsigned long addr, unsigned int order);
> > 
> > +void free_pages_bulk(struct page *page, int nr_pages);
> > +
> >  #define __free_page(page) __free_pages((page), 0)
> >  #define free_page(addr) free_pages((addr), 0)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 822e05f1a964..5f11224cf353 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5304,6 +5304,48 @@ static void ___free_pages(struct page *page, unsigned int order,
> >  	}
> >  }
> > 
> > +static void free_frozen_pages_bulk(struct page *page, int nr_pages)
> > +{
> > +	while (nr_pages) {
> > +		unsigned int fit_order, align_order, order;
> > +		unsigned long pfn;
> > +
> > +		pfn = page_to_pfn(page);
> > +		fit_order = ilog2(nr_pages);
> > +		align_order = pfn ? __ffs(pfn) : fit_order;
> > +		order = min3(fit_order, align_order, MAX_PAGE_ORDER);
> > +
> > +		free_frozen_pages(page, order);
> > +
> > +		page += 1U << order;
> > +		nr_pages -= 1U << order;
> > +	}
> > +}
> > +
> > +void free_pages_bulk(struct page *page, int nr_pages)
> > +{
> > +	struct page *start = NULL;
> > +	bool can_free;
> > +	int i;
> > +
> > +	for (i = 0; i < nr_pages; i++, page++) {
> > +		VM_BUG_ON_PAGE(PageHead(page), page);
> > +		VM_BUG_ON_PAGE(PageTail(page), page);
> > +
> > +		can_free = put_page_testzero(page);
> > +
> > +		if (!can_free && start) {
> > +			free_frozen_pages_bulk(start, page - start);
> > +			start = NULL;
> > +		} else if (can_free && !start) {
> > +			start = page;
> > +		}
> > +	}
> > +
> > +	if (start)
> > +		free_frozen_pages_bulk(start, page - start);
> > +}
> > +
> >  /**
> >   * __free_pages - Free pages allocated with alloc_pages().
> >   * @page: The page pointer returned from alloc_pages().
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index ecbac900c35f..8f782bac1ece 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -3429,7 +3429,8 @@ void vfree_atomic(const void *addr)
> >  void vfree(const void *addr)
> >  {
> >  	struct vm_struct *vm;
> > -	int i;
> > +	struct page *start;
> > +	int i, nr;
> > 
> >  	if (unlikely(in_interrupt())) {
> >  		vfree_atomic(addr);
> > @@ -3455,17 +3456,26 @@ void vfree(const void *addr)
> >  	/* All pages of vm should be charged to same memcg, so use first one. */
> >  	if (vm->nr_pages && !(vm->flags & VM_MAP_PUT_PAGES))
> >  		mod_memcg_page_state(vm->pages[0], MEMCG_VMALLOC, -vm->nr_pages);
> > -	for (i = 0; i < vm->nr_pages; i++) {
> > +
> > +	start = vm->pages[0];
> > +	BUG_ON(!start);
> > +	nr = 1;
> > +	for (i = 1; i < vm->nr_pages; i++) {
> >  		struct page *page = vm->pages[i];
> > 
> >  		BUG_ON(!page);
> > -		/*
> > -		 * High-order allocs for huge vmallocs are split, so
> > -		 * can be freed as an array of order-0 allocations
> > -		 */
> > -		__free_page(page);
> > -		cond_resched();
> > +
> > +		if (start + nr != page) {
> > +			free_pages_bulk(start, nr);
> > +			start = page;
> > +			nr = 1;
> > +			cond_resched();
> > +		} else {
> > +			nr++;
> > +		}
> >  	}
> > +	free_pages_bulk(start, nr);
> > +
> >  	if (!(vm->flags & VM_MAP_PUT_PAGES))
> >  		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
> >  	kvfree(vm->pages);
> > ---8<---
> 
> I tested this on a performance monitoring system and saw a huge improvement in
> the test_vmalloc tests.
> 
> Both columns are compared to v6.18. 6-19-0-rc1 has Vishal's change to allocate 
> large orders, which I previously reported the regressions for. vfree-high-order 
> adds the above patch to free contiguous order-0 pages in bulk.
> 
> (R)/(I) means statistically significant regression/improvement. Results are 
> normalized so that less than zero is regression and greater than zero is 
> improvement.
> 
> +-----------------+----------------------------------------------------------+--------------+------------------+
> | Benchmark       | Result Class                                             |   6-19-0-rc1 | vfree-high-order |
> +=================+==========================================================+==============+==================+
> | micromm/vmalloc | fix_align_alloc_test: p:1, h:0, l:500000 (usec)          |  (R) -40.69% |        (I) 3.98% |
> |                 | fix_size_alloc_test: p:1, h:0, l:500000 (usec)           |        0.10% |           -1.47% |
> |                 | fix_size_alloc_test: p:4, h:0, l:500000 (usec)           |  (R) -22.74% |       (I) 11.57% |
> |                 | fix_size_alloc_test: p:16, h:0, l:500000 (usec)          |  (R) -23.63% |       (I) 47.42% |
> |                 | fix_size_alloc_test: p:16, h:1, l:500000 (usec)          |       -1.58% |      (I) 106.01% |
> |                 | fix_size_alloc_test: p:64, h:0, l:100000 (usec)          |  (R) -24.39% |       (I) 99.12% |
> |                 | fix_size_alloc_test: p:64, h:1, l:100000 (usec)          |    (I) 2.34% |      (I) 196.87% |
> |                 | fix_size_alloc_test: p:256, h:0, l:100000 (usec)         |  (R) -23.29% |      (I) 125.42% |
> |                 | fix_size_alloc_test: p:256, h:1, l:100000 (usec)         |    (I) 3.74% |      (I) 238.59% |
> |                 | fix_size_alloc_test: p:512, h:0, l:100000 (usec)         |  (R) -23.80% |      (I) 132.38% |
> |                 | fix_size_alloc_test: p:512, h:1, l:100000 (usec)         |   (R) -2.84% |      (I) 514.75% |
> |                 | full_fit_alloc_test: p:1, h:0, l:500000 (usec)           |        2.74% |            0.33% |
> |                 | kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) |        0.58% |            1.36% |
> |                 | kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) |       -0.66% |            1.48% |
> |                 | long_busy_list_alloc_test: p:1, h:0, l:500000 (usec)     |  (R) -25.24% |       (I) 77.95% |
> |                 | pcpu_alloc_test: p:1, h:0, l:500000 (usec)               |       -0.58% |            0.60% |
> |                 | random_size_align_alloc_test: p:1, h:0, l:500000 (usec)  |  (R) -45.75% |        (I) 8.51% |
> |                 | random_size_alloc_test: p:1, h:0, l:500000 (usec)        |  (R) -28.16% |       (I) 65.34% |
> |                 | vm_map_ram_test: p:1, h:0, l:500000 (usec)               |       -0.54% |           -0.33% |
> +-----------------+----------------------------------------------------------+--------------+------------------+
> 
> What do you think?
> 
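The key step in free_frozen_pages_bulk() above is carving a contiguous run
into the largest blocks that are both buddy-aligned and no bigger than what
remains. Below is a minimal userspace sketch of that decomposition; it is
demo code mirroring the patch's fit_order/align_order logic, not part of
the patch itself, and MAX_PAGE_ORDER is assumed to be the usual buddy
maximum of 10.

/*
 * Sketch: decompose a run of nr_pages starting at pfn into maximal
 * power-of-two blocks whose start is aligned to their size.
 */
#include <stdio.h>
#include <strings.h>		/* ffsl() */

#define MAX_PAGE_ORDER 10	/* assumption: typical buddy maximum */

static unsigned int ilog2_ul(unsigned long n)
{
	unsigned int o = 0;

	while (n >>= 1)
		o++;
	return o;
}

int main(void)
{
	unsigned long pfn = 4;	/* start of the run */
	long nr_pages = 100;	/* length of the run */

	while (nr_pages) {
		unsigned int fit_order = ilog2_ul(nr_pages);
		unsigned int align_order = pfn ? ffsl(pfn) - 1 : fit_order;
		unsigned int order = fit_order < align_order ?
				     fit_order : align_order;

		if (order > MAX_PAGE_ORDER)
			order = MAX_PAGE_ORDER;

		/* here the kernel would call free_frozen_pages(page, order) */
		printf("pfn %3lu: order %u (%lu pages)\n",
		       pfn, order, 1UL << order);

		pfn += 1UL << order;
		nr_pages -= 1UL << order;
	}
	return 0;
}

Running it for 100 pages starting at pfn 4 prints blocks of
4+8+16+32+32+8 pages, i.e. six buddy frees instead of one hundred
order-0 frees.
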
You were first :)

Some figures from me:

# Default(3 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 541868 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 542515 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 541561 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 542951 usec

# Patch(3 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 585266 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 594301 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 598912 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 589345 usec

Now the perf figures are almost settled and aligned with the default!
We do use the per-CPU page cache for 3-page allocations.
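
That per-CPU fast path is what Ryan's earlier "order-0 from the PCP if
available, else try a large order, else fall back to order-0" idea would
preserve. A rough kernel-style sketch of that policy, where take_from_pcp(),
try_buddy_alloc() and split_to_order0() are hypothetical names invented
for illustration:

/*
 * Hypothetical sketch of the policy suggested earlier in the thread:
 * drain cheap order-0 pages from the per-CPU list first, then try one
 * larger buddy allocation, then fall back to order-0 from the buddy.
 */
static int alloc_vmalloc_pages(struct page **pages, int nr, gfp_t gfp)
{
	int got = take_from_pcp(pages, nr, gfp);	/* 1) PCP fast path */

	while (got < nr) {
		unsigned int order = min(ilog2(nr - got), MAX_PAGE_ORDER);
		struct page *page = try_buddy_alloc(order, gfp); /* 2) high order */

		if (!page && order) {
			order = 0;
			page = try_buddy_alloc(0, gfp);	/* 3) order-0 fallback */
		}
		if (!page)
			break;		/* caller handles partial success */

		split_to_order0(page, order, &pages[got]);
		got += 1U << order;
	}
	return got;
}

The point of step 1 is exactly what the 3-page numbers above show: as long
as the PCP list can satisfy the request, order-0 stays the cheapest path.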

# Default(100 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 5724919 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 5721430 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 5717224 usec

# Patch(100 pages)
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2629600 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2622811 usec
fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2629324 usec

~2x faster (5.72 sec vs 2.63 sec per 1000000 loops)! It is because freeing
now occurs much more efficiently, so we spend fewer cycles on the free path
compared with the default case.

See below; perf also confirms that vfree() consumes ~2x fewer cycles:

# Default
+   96.99%     0.49%  [test_vmalloc]        [k] fix_size_alloc_test
+   59.64%     2.38%  [kernel]              [k] vfree.part.0
+   45.69%    15.80%  [kernel]              [k] __free_frozen_pages
+   39.83%     0.00%  [kernel]              [k] ret_from_fork_asm
+   39.83%     0.00%  [kernel]              [k] ret_from_fork
+   39.83%     0.00%  [kernel]              [k] kthread
+   38.67%     0.00%  [test_vmalloc]        [k] test_func
+   36.64%     0.01%  [kernel]              [k] __vmalloc_node_noprof
+   36.63%     0.20%  [kernel]              [k] __vmalloc_node_range_noprof
+   17.55%     4.94%  [kernel]              [k] alloc_pages_bulk_noprof
+   16.46%    12.21%  [kernel]              [k] free_frozen_page_commit.isra.0
+   16.06%     8.09%  [kernel]              [k] vmap_small_pages_range_noflush
+   12.56%    10.82%  [kernel]              [k] __rmqueue_pcplist
+    9.45%     9.43%  [kernel]              [k] __get_pfnblock_flags_mask.isra.0
+    7.95%     7.95%  [kernel]              [k] pfn_valid
+    5.77%     0.03%  [kernel]              [k] remove_vm_area
+    5.44%     5.44%  [kernel]              [k] ___free_pages
+    4.67%     4.59%  [kernel]              [k] __vunmap_range_noflush
+    4.30%     4.30%  [kernel]              [k] __list_add_valid_or_report

# Patch
+   94.28%     1.00%  [test_vmalloc]        [k] fix_size_alloc_test
+   55.63%     0.03%  [kernel]              [k] __vmalloc_node_noprof
+   55.60%     3.78%  [kernel]              [k] __vmalloc_node_range_noprof
+   37.26%    19.29%  [kernel]              [k] vmap_small_pages_range_noflush
+   37.12%     5.63%  [kernel]              [k] vfree.part.0
+   30.59%     0.00%  [kernel]              [k] ret_from_fork_asm
+   30.59%     0.00%  [kernel]              [k] ret_from_fork
+   30.59%     0.00%  [kernel]              [k] kthread
+   28.79%     0.00%  [test_vmalloc]        [k] test_func
+   17.90%    17.88%  [kernel]              [k] pfn_valid
+   13.24%     0.02%  [kernel]              [k] remove_vm_area
+   10.90%    10.68%  [kernel]              [k] __vunmap_range_noflush
+   10.81%    10.80%  [kernel]              [k] free_pages_bulk
+    7.09%     0.51%  [kernel]              [k] alloc_pages_noprof
+    6.58%     0.41%  [kernel]              [k] alloc_pages_mpol
+    6.50%     0.30%  [kernel]              [k] free_frozen_pages_bulk
+    5.74%     0.97%  [kernel]              [k] __alloc_frozen_pages_noprof
+    5.70%     0.00%  [kernel]              [k] worker_thread
+    5.62%     0.02%  [kernel]              [k] process_one_work
+    5.57%     0.01%  [kernel]              [k] __purge_vmap_area_lazy
+    4.76%     2.55%  [kernel]              [k] get_page_from_freelist

So it is nice :)
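
For completeness, the other half of the win is the run detection on the
vfree() side: physically contiguous entries of vm->pages[] are batched into
one bulk call instead of one __free_page() per page. A tiny userspace
sketch of the same scan (demo code with pfns standing in for page pointers,
not taken from the patch):

#include <stdio.h>

static void free_run(unsigned long start_pfn, int nr)
{
	/* stand-in for free_pages_bulk(start, nr) */
	printf("bulk free: pfn %lu, %d pages\n", start_pfn, nr);
}

int main(void)
{
	/* two contiguous runs with a gap, like a vm->pages[] array */
	unsigned long pfns[] = { 10, 11, 12, 13, 40, 41, 42 };
	int n = sizeof(pfns) / sizeof(pfns[0]);
	unsigned long start = pfns[0];
	int i, nr = 1;

	for (i = 1; i < n; i++) {
		if (start + nr != pfns[i]) {	/* run broken */
			free_run(start, nr);
			start = pfns[i];
			nr = 1;
		} else {
			nr++;			/* run continues */
		}
	}
	free_run(start, nr);	/* flush the final run */
	return 0;
}

This prints one bulk free of 4 pages at pfn 10 and one of 3 pages at
pfn 40, which is exactly the batching the new vfree() loop performs before
free_pages_bulk() further splits each run by order.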

--
Uladzislau Rezki
