Message-ID: <3tqmyo3qqaykszxmrmkaa3fo5hndc4ok6xrxozjvlmq5qjv4cs@2geqqedyfzcf>
Date: Mon, 2 Dec 2024 22:58:04 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: Frank van der Linden <fvdl@...gle.com>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
Muchun Song <muchun.song@...ux.dev>, Miaohe Lin <linmiaohe@...wei.com>,
Oscar Salvador <osalvador@...e.de>, David Hildenbrand <david@...hat.com>,
Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/hugetlb: optionally pre-zero hugetlb pages
On Mon, Dec 02, 2024 at 08:20:58PM +0000, Frank van der Linden wrote:
> Fresh hugetlb pages are zeroed out when they are faulted in,
> just like with all other page types. This can take up a good
> amount of time for larger page sizes (e.g. around 40
> milliseconds for a 1G page on a recent AMD-based system).
>
> This normally isn't a problem, since hugetlb pages are typically
> mapped by the application for a long time, and the initial
> delay when touching them isn't much of an issue.
>
> However, there are some use cases where a large number of hugetlb
> pages are touched when an application (such as a VM backed by these
> pages) starts. For 256 1G pages and 40ms per page, this would take
> 10 seconds, a noticeable delay.
The current huge page zeroing code is not that great to begin with.
There was a patchset posted some time ago to remedy at least some of it:
https://lore.kernel.org/all/20230830184958.2333078-1-ankur.a.arora@oracle.com/
but it apparently fell through the cracks.
Any games with "background zeroing" are notoriously crappy and I would
argue one should exhaust other avenues before going there -- at the end
of the day the cost of zeroing will have to get paid.
To that end I would suggest picking up the patchset and experimenting
with more variants of the zeroing code (for example, for 1G pages it may
be faster to employ SIMD in the routine).
If this is really such a problem, I wonder if this could start as a
series of 2MB pages instead, faulted in as needed and eventually
promoted to 1G after passing some threshold?