Message-ID: <9daa39e6-9653-45cc-8c00-abf5f3bae974@kernel.org>
Date: Wed, 14 Jan 2026 18:21:08 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Li Zhe <lizhe.67@...edance.com>, akpm@...ux-foundation.org,
 ankur.a.arora@...cle.com, fvdl@...gle.com, joao.m.martins@...cle.com,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
 muchun.song@...ux.dev, osalvador@...e.de, raghavendra.kt@....com
Subject: Re: [PATCH v2 0/8] Introduce a huge-page pre-zeroing mechanism

>> But again, I think the main motivation here is "increase application
>> startup", not optimize that the zeroing happens at specific points in
>> time during system operation (e.g., when idle etc).
>>
> 
> Framing this as "increase application startup" and merely shifting the
> overhead to shutdown seems like gaming the problem statement to me.
> The real problem is total real time spent on it while pages are
> needed.
> 
> Support for background zeroing can give you more usable pages provided
> it has the cpu + ram to do it. If it does not, you are in the worst
> case in the same spot as with zeroing on free.
> 
> Let's take a look at some examples.
> 
> Say there are no free huge pages and you kill a vm + start a new one.
> On top of that all CPUs are pegged as is. In this case total time is
> the same for "zero on free" as it is for background zeroing.

Right. If the pages get freed only to be allocated again immediately,
it doesn't really matter who does the zeroing. There might be some
details, of course.

> 
> Say the system is freshly booted and you start up a vm. There are no
> pre-zeroed pages available so it suffers at start time no matter what.
> However, with some support for background zeroing, the machinery could
> respond to demand and do it in parallel in some capacity, shortening
> the real time needed.

Just like for init_on_free, I would start with zeroing these pages 
during boot.

init_on_free ensures that all pages in the buddy allocator are zeroed
out, which greatly simplifies the implementation, because there is no
need to track what was initialized and what was not.

It's a good question whether that boot-time initialization should be
done in parallel, possibly asynchronously. It reminds me a bit of
deferred page initialization during boot. But that is rather an
extension that could be added somewhat transparently on top later.
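
To make that concrete, here is a very rough sketch of what a serial
boot-time pass could look like. Only the core folio helpers are real;
the free-list walk is simplified, everything else (function names, the
initcall placement) is invented for illustration, and locking against
concurrent allocations is deliberately left out:

/*
 * Rough sketch only: walk the hugetlb free lists once at boot and zero
 * every free folio, so "free implies zeroed" holds from the start.
 * Taking hugetlb_lock and the exact free-list layout are glossed over.
 */
#include <linux/init.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/hugetlb.h>
#include <linux/nodemask.h>
#include <linux/sched.h>

static void zero_hugetlb_folio(struct folio *folio)
{
        long i, nr = folio_nr_pages(folio);

        /* Clear one base page at a time so we can reschedule in between. */
        for (i = 0; i < nr; i++) {
                clear_highpage(folio_page(folio, i));
                cond_resched();
        }
}

/* Serial version; could later be parallelized like deferred page init. */
static int __init hugetlb_prezero_boot(void)
{
        struct hstate *h;
        int nid;

        for_each_hstate(h) {
                for_each_online_node(nid) {
                        struct folio *folio;

                        /* Assumes free folios sit on hugepage_freelists via ->lru. */
                        list_for_each_entry(folio,
                                            &h->hugepage_freelists[nid], lru)
                                zero_hugetlb_folio(folio);
                }
        }
        return 0;
}
late_initcall(hugetlb_prezero_boot);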

If ever required, we could dynamically enable this setting on a running
system. Whoever enables it (flips the magic toggle) would zero out all
hugetlb pages that are already sitting in the hugetlb allocator as free
but not yet initialized.

But again, these are extensions on top of the basic design of having all 
free hugetlb folios be zeroed.
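
For the runtime case, the magic toggle could be little more than a
writable knob whose enable path reuses the same free-list walk. A
sketch, where the knob name, prezero_enabled and
hugetlb_zero_all_free_folios() are all made-up names:

/*
 * Sketch of the runtime toggle: on enable, zero everything that is
 * currently free in the pool once; afterwards the free path keeps the
 * invariant. All names here are invented for illustration.
 */
#include <linux/kobject.h>
#include <linux/kstrtox.h>
#include <linux/sysfs.h>
#include <linux/types.h>

static bool prezero_enabled;

/* Would reuse the free-list walk from the boot-time sketch. */
void hugetlb_zero_all_free_folios(void);

static ssize_t prezero_store(struct kobject *kobj,
                             struct kobj_attribute *attr,
                             const char *buf, size_t count)
{
        bool enable;
        int ret;

        ret = kstrtobool(buf, &enable);
        if (ret)
                return ret;

        /*
         * On 0 -> 1, everything already sitting free is treated as "not
         * initialized yet" and zeroed once, so no per-folio tracking is
         * needed.
         */
        if (enable && !prezero_enabled)
                hugetlb_zero_all_free_folios();

        prezero_enabled = enable;
        return count;
}

/* Would be hooked into the hugetlb sysfs attribute group. */
static struct kobj_attribute prezero_attr = __ATTR_WO(prezero);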

> 
> Say a little bit of real time passes and you start another vm. With
> merely zeroing on free there are still no pre-zeroed pages available
> so it again suffers the overhead. With background zeroing, some of
> that memory would already be sorted out, speeding up said startup.

The moment they end up in the hugetlb allocator as free folios, they 
would have to get initialized.
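
In other words, the invariant would be enforced on the free path
itself. Roughly (enqueue_free_hugetlb_folio() is a stand-in for
wherever the real free path hands a folio back to the per-node list,
and prezero_enabled / zero_hugetlb_folio() come from the sketches
above):

/*
 * Sketch: zero on the way back into the pool, so the allocation side
 * never has to clear anything. The function name is a stand-in; the
 * real free path and its bookkeeping are not shown.
 */
static void enqueue_free_hugetlb_folio(struct hstate *h, struct folio *folio)
{
        if (prezero_enabled)
                zero_hugetlb_folio(folio);      /* from the boot-time sketch */

        /* ... existing bookkeeping: add to the per-node free list, counters ... */
}

Which of course means the zeroing cost lands on the freeing side, as
discussed above.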

Now, I am sure there are downsides to this approach (for example, how
to speed up process exit by parallelizing the zeroing, if that is ever
required). But it sounds a bit ... simpler, with no user space changes
required. In theory :)

-- 
Cheers

David
