Message-ID: <bb4a2b21-b864-4afe-8aac-963e55c9d74d@arm.com>
Date: Tue, 6 Jan 2026 11:04:57 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 David Hildenbrand <david@...nel.org>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>, Vlastimil Babka
 <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
 Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
 Brendan Jackman <jackmanb@...gle.com>, Johannes Weiner <hannes@...xchg.org>,
 Zi Yan <ziy@...dia.com>, Uladzislau Rezki <urezki@...il.com>,
 "Vishal Moola (Oracle)" <vishal.moola@...il.com>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 2/2] vmalloc: Optimize vfree

On 06/01/2026 04:36, Matthew Wilcox wrote:
> On Mon, Jan 05, 2026 at 04:17:38PM +0000, Ryan Roberts wrote:
>> +	if (vm->nr_pages) {
>> +		start_pfn = page_to_pfn(vm->pages[0]);
>> +		nr = 1;
>> +		for (i = 1; i < vm->nr_pages; i++) {
>> +			unsigned long pfn = page_to_pfn(vm->pages[i]);
>> +
>> +			if (start_pfn + nr != pfn) {
>> +				__free_contig_range(start_pfn, nr);
>> +				start_pfn = pfn;
>> +				nr = 1;
>> +				cond_resched();
>> +			} else {
>> +				nr++;
>> +			}
> 
> It kind of feels like __free_contig_range() and this routine do the same
> thing -- iterate over each page and make sure that it's compatible with
> being freed.  What if we did ...

__free_contig_range() as I implemented it is common to vfree() and
free_contig_range(), so more users benefit from the optimization. If we
moved put_page_testzero() into vfree(), we would also need a loop in
free_contig_range() to do the same thing.
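
To make that concrete, the extra loop would look something like this
(illustrative only; the function body and the VM_WARN_ON() are my
sketch, not code from the series):

	void free_contig_range(unsigned long pfn, unsigned long nr_pages)
	{
		unsigned long i;

		/* Drop the reference held on each order-0 page before
		 * handing the run back, mirroring what vfree() would
		 * then have to do on its side. */
		for (i = 0; i < nr_pages; i++)
			VM_WARN_ON(!put_page_testzero(pfn_to_page(pfn + i)));

		__free_contig_range(pfn, nr_pages);
	}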

Additionally, where do you propose to put free_pages_prepare()? In my
implementation that's currently handled by the loop in
__free_contig_range(), and I don't think we really want to export it
outside of page_alloc.c. Zi was suggesting the long-term solution might
be to make free_pages_prepare() aware of a contiguous range of order-0
pages, but that's a future improvement I wasn't planning to do here, so
for now it needs to be called for each order-0 page.
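
For reference, the rough shape of that loop (a simplified sketch, not
the exact code from patch 1/2; the free_pages_prepare() call signature
is from my memory of page_alloc.c):

	for (i = 0; i < nr; i++) {
		struct page *page = pfn_to_page(start_pfn + i);

		/* Per-page prep (poison checks, flags, kasan, ...) still
		 * runs once per order-0 page until a range-aware
		 * free_pages_prepare() exists. */
		if (unlikely(!free_pages_prepare(page, 0)))
			continue;	/* e.g. hwpoison: skip this page */

		/* ... accumulate page into the run handed to the buddy ... */
	}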

> 
> +	for (i = 0; i < vm->nr_pages; i++) {
> +		struct page *page = vm->pages[i];
> +
> +		if (!put_page_testzero(page)) {
> +			__free_frozen_contig_pages(start_page, nr);
> +			nr = 0;
> +			continue;
> +		}
> +
> +		if (!nr) {
> +			start_page = page;
> +			nr = 1;
> +			continue;
> +		}
> +
> +		if (start_page + nr != page) {

It was my understanding that a contiguous run of PFNs guarantees a
corresponding contiguous run of struct pages, but not vice versa; I
thought there was a memory model where holes in the PFN space are closed
up in the vmemmap, so the fact that two struct pages are virtually
contiguous doesn't mean their PFNs are physically contiguous. That's why
I was using PFNs here.

Perhaps I'm wrong?
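
Concretely, the distinction I was trying to preserve
(extends_pfn_run() is a hypothetical helper, just to make the check
explicit):

	/* True iff @page physically extends the run starting at
	 * @start_pfn; comparing PFNs rather than struct page pointers
	 * avoids assuming pfn_to_page() is linear across the range. */
	static inline bool extends_pfn_run(unsigned long start_pfn,
					   unsigned long nr,
					   struct page *page)
	{
		return page_to_pfn(page) == start_pfn + nr;
	}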

Thanks,
Ryan

> +			__free_frozen_contig_pages(start_page, nr);
> +			start_page = page;
> +			nr = 1;
> +			cond_resched();
> +		} else {
> +			nr++;
> +		}
> +	}
> +
> +	__free_frozen_contig_pages(start_page, nr);
> 
> That way we don't need to mess around with returning the number of pages
> not freed.

