Message-Id: <20260113063929.29767-1-lizhe.67@bytedance.com>
Date: Tue, 13 Jan 2026 14:39:28 +0800
From: "Li Zhe" <lizhe.67@...edance.com>
To: <ankur.a.arora@...cle.com>
Cc: <akpm@...ux-foundation.org>, <david@...nel.org>, <fvdl@...gle.com>, 
	<joao.m.martins@...cle.com>, <linux-kernel@...r.kernel.org>, 
	<linux-mm@...ck.org>, <lizhe.67@...edance.com>, <mhocko@...e.com>, 
	<mjguzik@...il.com>, <muchun.song@...ux.dev>, <osalvador@...e.de>, 
	<raghavendra.kt@....com>
Subject: Re: [PATCH v2 0/8] Introduce a huge-page pre-zeroing mechanism

On Mon, 12 Jan 2026 14:00:23 -0800, ankur.a.arora@...cle.com wrote:

> > Regarding concern (3), I am aware that QEMU has implemented a parallel
> > page-touch mechanism, which does reduce VM creation time; nevertheless,
> > in our measurements it still consumes a non-trivial amount of time.
> > (According to feedback from QEMU colleagues, bringing up a 2 TB VM
> > still requires more than 40 seconds for zeroing)
> >
> >> > Fresh hugetlb pages are zeroed out when they are faulted in,
> >> > just like with all other page types. This can take up a good
> >> > amount of time for larger page sizes (e.g. around 250
> >> > milliseconds for a 1G page on a Skylake machine).
> >> >
> >> > This normally isn't a problem, since hugetlb pages are typically
> >> > mapped by the application for a long time, and the initial
> >> > delay when touching them isn't much of an issue.
> >> >
> >> > However, there are some use cases where a large number of hugetlb
> >> > pages are touched when an application starts (such as a VM backed
> >> > by these pages), rendering the launch noticeably slow.
> >> >
> >> > On a Skylake platform running v6.19-rc2, faulting in 64 × 1 GB huge
> >> > pages takes about 16 seconds, roughly 250 ms per page. Even with
> >> > Ankur's optimizations[2], the time drops only to ~13 seconds,
> >> > ~200 ms per page, still a noticeable delay.
> >
> > As for concern (4), I believe it is orthogonal to this patchset, and
> > the cover letter already contains a performance comparison that
> > demonstrates the additional benefit.
> 
> That comparison isn't quite apples to apples though. In the fault
> workload above, you are looking at single-threaded zeroing, but
> realistically clearing pages at VM init is multi-threaded (QEMU does
> that as David describes).
> 
> Also Skylake has probably one of the slowest REP; STOS implementations
> I've tried.

Hi Ankur, thanks for your reply.

The test above merely offers a straightforward comparison of
page-clearing speeds. Its sole purpose is to demonstrate that the
current zeroing phase remains excessively time-consuming.
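
A workload of roughly the following shape reproduces the
single-threaded fault-in test (a minimal sketch, not the exact program
behind the numbers above; it assumes 1 GB pages have been reserved,
e.g. hugepagesz=1G hugepages=64 on the kernel command line):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* 30 == log2(1 GB), shifted by MAP_HUGE_SHIFT */
#endif

#define GB	(1024UL * 1024 * 1024)

int main(int argc, char **argv)
{
	unsigned long i, npages = argc > 1 ? strtoul(argv[1], NULL, 0) : 64;
	struct timespec t0, t1;
	double secs;
	char *p;

	p = mmap(NULL, npages * GB, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
		 -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < npages; i++)
		p[i * GB] = 1;	/* first touch faults in (and zeroes) one 1 GB page */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%lu pages in %.2f s (%.0f ms/page)\n",
	       npages, secs, secs * 1000 / npages);
	return 0;
}

Each loop iteration faults in exactly one 1 GB page, so the per-page
time it reports corresponds directly to the ~250 ms figure above.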

Even with multi-threaded clearing (QEMU caps the number of concurrent
zeroing threads at 16), booting a 2 TB VM still spends over 40 seconds
on zeroing. That is consistent with a back-of-the-envelope estimate:
2 TB is 2048 1 GB pages, so at ~250 ms per page across 16 threads the
zeroing floor is about 2048 * 0.25 / 16 ≈ 32 seconds. By the same
arithmetic, even after the clear_page optimization patches are merged
(~200 ms per page), a substantial amount of time, roughly 26 seconds
for 2 TB, will still be spent on page zeroing when bringing up a
large-scale VM.
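
For reference, the parallel touch is shaped roughly like the sketch
below (a simplified pthread version of what touch_all_pages() in
QEMU's util/oslib-posix.c does; the names and the static split policy
here are illustrative, not QEMU's actual code):

#include <pthread.h>
#include <stddef.h>

struct touch_arg {
	char *base;	/* start of this worker's slice */
	size_t npages;	/* huge pages in this slice */
	size_t pagesz;	/* huge page size, e.g. 1 GB */
};

static void *touch_worker(void *opaque)
{
	struct touch_arg *a = opaque;
	size_t i;

	for (i = 0; i < a->npages; i++)
		a->base[i * a->pagesz] = 0;	/* fault + zero one huge page */
	return NULL;
}

/* Touch 'npages' huge pages at 'base' using up to 'nthreads' workers. */
int touch_all_pages(char *base, size_t pagesz, size_t npages, int nthreads)
{
	pthread_t tids[nthreads];
	struct touch_arg args[nthreads];
	size_t per = (npages + nthreads - 1) / nthreads, off = 0;
	int i, n = 0;

	for (; n < nthreads && off < npages; n++, off += per) {
		args[n].base = base + off * pagesz;
		args[n].npages = (off + per <= npages) ? per : npages - off;
		args[n].pagesz = pagesz;
		if (pthread_create(&tids[n], NULL, touch_worker, &args[n]))
			return -1;
	}
	for (i = 0; i < n; i++)
		pthread_join(tids[i], NULL);
	return 0;
}

Every worker still pays the full fault-time zeroing cost for its share
of pages; pre-zeroing moves that cost off the fault path entirely
rather than spreading it across more threads at VM start.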

Thanks,
Zhe
