Message-ID: <20251203181934.GB478168@cmpxchg.org>
Date: Wed, 3 Dec 2025 13:19:34 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Gregory Price <gourry@...rry.net>
Cc: linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
	mhocko@...e.com, jackmanb@...gle.com, ziy@...dia.com,
	kas@...nel.org, dave.hansen@...ux.intel.com,
	rick.p.edgecombe@...el.com, muchun.song@...ux.dev,
	osalvador@...e.de, david@...hat.com, x86@...nel.org,
	linux-coco@...ts.linux.dev, kvm@...r.kernel.org,
	Wei Yang <richard.weiyang@...il.com>,
	David Rientjes <rientjes@...gle.com>,
	Joshua Hahn <joshua.hahnjy@...il.com>
Subject: Re: [PATCH v4] page_alloc: allow migration of smaller hugepages
 during contig_alloc

On Wed, Dec 03, 2025 at 12:53:12PM -0500, Gregory Price wrote:
> On Wed, Dec 03, 2025 at 12:32:09PM -0500, Johannes Weiner wrote:
> > The reason I'm bringing this up is that this function overall looks
> > kind of unnecessary. Page isolation already checks all of these
> > conditions, and arbitrates huge pages via hugepage_migration_supported() -
> > which seems to be the semantics you want here as well.
> > 
> > Would it make sense to just remove pfn_range_valid_contig()?
> 
> This seems like a pretty clear optimization that was added at some point
> to avoid the cost of starting to isolate 512MB of pages and then having
> to undo it all because the range contained a single huge page.
> 
>         for_each_zone_zonelist_nodemask(zone, z, zonelist,
>                                         gfp_zone(gfp_mask), nodemask) {
> 
>                 spin_lock_irqsave(&zone->lock, flags);
>                 pfn = ALIGN(zone->zone_start_pfn, nr_pages);
>                 while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
>                         if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
> 
>                                 spin_unlock_irqrestore(&zone->lock, flags);
>                                 ret = __alloc_contig_pages(pfn, nr_pages,
>                                                         gfp_mask);
>                                 spin_lock_irqsave(&zone->lock, flags);
> 
>                         }
>                         pfn += nr_pages;
>                 }
>                 spin_unlock_irqrestore(&zone->lock, flags);
>         }
> 
> and then
> 
> __alloc_contig_pages
> 	ret = start_isolate_page_range(start, end, mode);
> 
> This is called without pre-checking the range for unmovable pages.
> 
> Seems dangerous to remove without significant data.

Fair enough. It just caught my eye that the page allocator is running
all the same checks as page isolation itself.
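
For context, here is roughly what the overlap looks like. This is
paraphrased from my memory of mainline, and the details (signatures,
the third argument to alloc_contig_range(), the exact set of checks)
vary by tree, so treat it as a sketch rather than the authoritative
code:

	/* The pre-check: a linear scan that bails on anything unmovable. */
	static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
					   unsigned long nr_pages)
	{
		unsigned long i, end_pfn = start_pfn + nr_pages;
		struct page *page;

		for (i = start_pfn; i < end_pfn; i++) {
			page = pfn_to_online_page(i);
			if (!page)
				return false;
			/* The range must not cross into another zone */
			if (page_zone(page) != z)
				return false;
			if (PageReserved(page))
				return false;
			/* Any huge page disqualifies the whole candidate range */
			if (PageHuge(page))
				return false;
		}
		return true;
	}

	/* The allocation path it guards is a thin wrapper; the isolation
	 * code then re-derives the same movability information on its own. */
	static int __alloc_contig_pages(unsigned long start_pfn,
					unsigned long nr_pages, gfp_t gfp_mask)
	{
		unsigned long end_pfn = start_pfn + nr_pages;

		return alloc_contig_range(start_pfn, end_pfn, ACR_FLAGS_NONE,
					  gfp_mask);
	}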

I agree that a quick up-front check is useful before updating hundreds
of pageblocks, only to fail and unroll on the last one. Arguably that
check should be part of the isolation code itself, though, not a random
callsite. But that move is better done in a separate patch.
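
Something like the shape below is what I have in mind. To be clear,
this is purely hypothetical, not part of this patch; the types are
simplified and pfn_range_movable() is a made-up helper name:

	/* Hypothetical: let the isolation entry point do the cheap linear
	 * scan itself before it starts marking pageblocks isolated, so
	 * every caller benefits and the undo path is rarely exercised. */
	int start_isolate_page_range(unsigned long start, unsigned long end,
				     int mode)
	{
		if (!pfn_range_movable(start, end))	/* hypothetical helper */
			return -EBUSY;

		/* ... existing per-pageblock isolation and undo logic ... */
	}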
