Message-ID: <cb52312d-348b-49d5-b0d7-0613fb38a558@redhat.com>
Date: Fri, 16 May 2025 14:21:04 +0200
From: David Hildenbrand <david@...hat.com>
To: Pankaj Raghav <p.raghav@...sung.com>, "Darrick J . Wong"
<djwong@...nel.org>, hch@....de, willy@...radead.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, mcgrof@...nel.org, gost.dev@...sung.com,
Andrew Morton <akpm@...ux-foundation.org>, kernel@...kajraghav.com
Subject: Re: [RFC 1/3] mm: add large zero page for efficient zeroing of larger
segments

On 16.05.25 12:10, Pankaj Raghav wrote:
> Introduce LARGE_ZERO_PAGE of size 2M as an alternative to ZERO_PAGE of
> size PAGE_SIZE.
>
> There are many places in the kernel where we need to zero out larger
> chunks, but the maximum segment we can zero out at a time is limited
> by PAGE_SIZE.
>
> This is especially annoying in block devices and filesystems, where we
> attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
> bvec support in the block layer, it is much more efficient to send out
> larger zero pages as part of a single bvec.
>
> While there are other options such as huge_zero_page, its allocation
> can fail under system memory pressure, requiring a fallback to
> ZERO_PAGE [3].
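
For context, the ZERO_PAGE pattern described above looks roughly like
the sketch below (loosely modelled on the blk-lib.c style, not the
actual code), where every PAGE_SIZE worth of zeroes ends up as its own
bvec:

static void add_zero_pages(struct bio *bio, sector_t nr_sects)
{
	while (nr_sects) {
		unsigned int len = min_t(sector_t, nr_sects << 9,
					 PAGE_SIZE);

		/* One bvec per PAGE_SIZE chunk of zeroes. */
		if (bio_add_page(bio, ZERO_PAGE(0), len, 0) != len)
			break;
		nr_sects -= len >> 9;
	}
}

With a 2M zero page, the same range could be covered by a single
bio_add_page() call, i.e. a single bvec.
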
Instead of adding another one, why not have a config option that will
always allocate the huge zeropage, and never free it?
I mean, the whole thing about dynamically allocating/freeing it was for
memory-constrained systems. For large systems, we just don't care.
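
Roughly what I have in mind (completely untested sketch; the Kconfig
symbol and the init hook are made up, and a real patch would simply
reuse the existing huge zeropage code in mm/huge_memory.c and skip the
shrinker):

static struct folio *persistent_huge_zero_folio;

static int __init persistent_huge_zero_init(void)
{
	if (!IS_ENABLED(CONFIG_ALWAYS_HAVE_HUGE_ZERO_PAGE))
		return 0;

	/* Allocate the huge zeropage once at boot and never free it. */
	persistent_huge_zero_folio = folio_alloc(GFP_KERNEL | __GFP_ZERO,
						 HPAGE_PMD_ORDER);
	if (!persistent_huge_zero_folio)
		return -ENOMEM;

	return 0;
}
core_initcall(persistent_huge_zero_init);

Callers could then rely on it unconditionally, without a ZERO_PAGE
fallback.
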
--
Cheers,
David / dhildenb