Message-ID: <83798495-915b-4a5d-9638-f5b3de913b71@kernel.org>
Date: Thu, 15 Jan 2026 12:08:03 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Li Zhe <lizhe.67@...edance.com>
Cc: akpm@...ux-foundation.org, ankur.a.arora@...cle.com, fvdl@...gle.com,
 joao.m.martins@...cle.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 mhocko@...e.com, mjguzik@...il.com, muchun.song@...ux.dev,
 osalvador@...e.de, raghavendra.kt@....com
Subject: Re: [PATCH v2 0/8] Introduce a huge-page pre-zeroing mechanism

On 1/15/26 10:36, Li Zhe wrote:
> On Wed, 14 Jan 2026 18:21:08 +0100, david@...nel.org wrote:
>    
>>>> But again, I think the main motivation here is "increase application
>>>> startup", not optimize that the zeroing happens at specific points in
>>>> time during system operation (e.g., when idle etc).
>>>>
>>>
>>> Framing this as "increase application startup" and merely shifting the
>>> overhead to shutdown seems like gaming the problem statement to me.
>>> The real problem is the total real time spent on zeroing while pages
>>> are needed.
>>>
>>> Support for background zeroing can give you more usable pages, provided
>>> the system has the CPU + RAM to do it. If it does not, you are in the
>>> worst case in the same spot as with zeroing on free.
>>>
>>> Let's take a look at some examples.
>>>
>>> Say there are no free huge pages and you kill a vm + start a new one.
>>> On top of that all CPUs are pegged as is. In this case total time is
>>> the same for "zero on free" as it is for background zeroing.
>>
>> Right. If the pages get freed only to be immediately allocated again,
>> it doesn't really matter who does the freeing. There might be some
>> details, of course.
>>
>>>
>>> Say the system is freshly booted and you start up a vm. There are no
>>> pre-zeroed pages available so it suffers at start time no matter what.
>>> However, with some support for background zeroing, the machinery could
>>> respond to demand and do it in parallel in some capacity, shortening
>>> the real time needed.
>>
>> Just like for init_on_free, I would start with zeroing these pages
>> during boot.
>>
>> init_on_free ensures that all pages in the buddy were zeroed out. Which
>> greatly simplifies the implementation, because there is no need to track
>> what was initialized and what was not.
>>
>> It's a good question whether that initialization should be done in
>> parallel, possibly asynchronously, during boot. It reminds me a bit of
>> deferred page initialization during boot. But that is rather an
>> extension that could be added somewhat transparently on top later.
>>
>> If ever required, we could dynamically enable this setting on a running
>> system. Whoever enables it (flips the magic toggle) would zero out all
>> hugetlb pages that are already sitting free in the hugetlb allocator but
>> are not initialized yet.
>>
>> But again, these are extensions on top of the basic design of having all
>> free hugetlb folios be zeroed.
>>
>>>
>>> Say a little bit of real time passes and you start another vm. With
>>> merely zeroing on free there are still no pre-zeroed pages available,
>>> so it again suffers the overhead. With background zeroing, some of
>>> that memory would already be sorted out, speeding up said startup.
>>
>> The moment they end up in the hugetlb allocator as free folios they
>> would have to get initialized.
>>
>> Now, I am sure there are downsides to this approach (e.g., how to speed
>> up process exit by parallelizing the zeroing, if ever required). But it
>> sounds a bit ... simpler, with no user space changes required. In
>> theory :)
> 
> I strongly agree that the init_on_free strategy effectively eliminates
> the latency incurred during VM creation. However, it appears to introduce
> two new issues.
> 
> First, the process that later allocates a page may not be the one that
> freed it, raising the question of which process should bear the cost
> of zeroing.

Right now the cost is paid by the process that allocates a page. If you
shift that to the freeing path, it's still the same process, just at a
different point in time.

Of course, there are exceptions to that: if you have a hugetlb file that
is shared by multiple processes (-> the cost lands on the process that
essentially truncates the file). Or if someone (a GUP pin) holds a
reference to a folio even after the file was truncated (not common but
possible).

With CoW it would be the process that last unmaps the folio. CoW with
hugetlb is fortunately something that is rare (and rather shaky :) ).

> 
> Second, put_page() may be executed in atomic context, making it
> inappropriate to invoke clear_page() there; off-loading the zeroing to
> a workqueue merely reopens the same accounting problem.

I thought about this as well. For init_on_free we always invoke the
zeroing for up to 4 MiB folios during put_page() on x86-64.

See __folio_put()->__free_frozen_pages()->free_pages_prepare(), where we
call kernel_init_pages(page, 1 << order);

So surely, for 2 MiB folios (hugetlb) this is not a problem.

... but then, on arm64 with 64k base pages we have 512 MiB folios
(managed by the buddy!) where this is apparently not a problem? Or is
it a problem that should be fixed?
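For reference, the zeroing on the free path boils down to a per-base-page
loop; here is a paraphrased sketch of the helper named above
(mm/page_alloc.c), not the literal upstream code, and details vary across
kernel versions:

	/*
	 * Paraphrased sketch of kernel_init_pages() as invoked from
	 * free_pages_prepare(). The point: the cost scales with memory
	 * size, not with folio count. A 2 MiB folio on x86-64 means
	 * 512 iterations; a 512 MiB folio on arm64 with 64k base pages
	 * means 8192.
	 */
	static void kernel_init_pages(struct page *page, int numpages)
	{
		int i;

		for (i = 0; i < numpages; i++)
			clear_highpage(page + i);
	}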

So I would expect that once we go up to 1 GiB, we might only reveal more
areas that we should have optimized in the first place by dropping the
reference outside the spin lock ... and these optimizations would
obviously (unless in hugetlb-specific code ...) benefit init_on_free
setups as well (and page poisoning).


Looking at __unmap_hugepage_range(), for example, we already make sure
to not drop the reference while holding the PTL (spinlock).

In general, I think when using MMU gather we drop folio references
outside of the PTL, because we know that it can hurt performance badly.
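The pattern, as an illustrative-only sketch (the batch variables here are
hypothetical; only folio_put(), spin_lock()/spin_unlock() and
cond_resched() are actual kernel API):

	spin_lock(ptl);
	/* ... clear PTEs, stash the folios into a local batch ... */
	spin_unlock(ptl);

	/*
	 * Only now pay the potentially expensive freeing cost, which
	 * with init_on_free or page poisoning scales with memory size.
	 * Outside the spinlock we are also free to reschedule.
	 */
	for (i = 0; i < nr; i++)
		folio_put(batch[i]);
	cond_resched();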

I documented some of the nasty things that can happen with MMU gather in

commit e61abd4490684de379b4a2ef1be2dbde39ac1ced
Author: David Hildenbrand <david@...nel.org>
Date:   Wed Feb 14 21:44:34 2024 +0100

     mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
     
     In tlb_batch_pages_flush(), we can end up freeing up to 512 pages or now
     up to 256 folio fragments that span more than one page, before we
     conditionally reschedule.
     
     It's a pain that we have to handle cond_resched() in
     tlb_batch_pages_flush() manually and cannot simply handle it in
     release_pages() -- release_pages() can be called from atomic context.
     Well, in a perfect world we wouldn't have to make our code more
     complicated at all.
     
     With page poisoning and init_on_free, we might now run into soft lockups
     when we free a lot of rather large folio fragments, because page freeing
     time then depends on the actual memory size we are freeing instead of on
     the number of folios that are involved.
     
     In the absolute (unlikely) worst case, on arm64 with 64k we will be able
     to free up to 256 folio fragments that each span 512 MiB: zeroing out 128
     GiB does sound like it might take a while.  But instead of ignoring this
     unlikely case, let's just handle it.
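
The gist of that fix, paraphrased (the batch size and loop structure here
are from memory; treat it as a sketch rather than the literal
mm/mmu_gather.c code):

	/*
	 * Free in bounded batches and allow rescheduling in between,
	 * so that per-byte work (init_on_free, page poisoning) cannot
	 * stall the CPU for an entire TLB batch flush.
	 */
	while (nr_pages) {
		unsigned int nr = min(nr_pages, 512u);

		release_pages(pages, nr);	/* may zero/poison each page */
		pages += nr;
		nr_pages -= nr;
		cond_resched();			/* avoid soft lockups */
	}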


But more generally, when dealing with the PTL we try to put folio
references outside the lock (there are some cases in mm/memory.c where we
apparently don't do it yet), because freeing memory can take a while.

-- 
Cheers

David
