Message-ID: <20250527050452.817674-1-p.raghav@samsung.com>
Date: Tue, 27 May 2025 07:04:49 +0200
From: Pankaj Raghav <p.raghav@...sung.com>
To: Suren Baghdasaryan <surenb@...gle.com>,
Ryan Roberts <ryan.roberts@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Borislav Petkov <bp@...en8.de>,
Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>,
Zi Yan <ziy@...dia.com>,
Mike Rapoport <rppt@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Nico Pache <npache@...hat.com>,
Dev Jain <dev.jain@....com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Jens Axboe <axboe@...nel.dk>
Cc: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-block@...r.kernel.org,
willy@...radead.org,
x86@...nel.org,
linux-fsdevel@...r.kernel.org,
"Darrick J . Wong" <djwong@...nel.org>,
mcgrof@...nel.org,
gost.dev@...sung.com,
kernel@...kajraghav.com,
hch@....de,
Pankaj Raghav <p.raghav@...sung.com>
Subject: [RFC 0/3] add STATIC_PMD_ZERO_PAGE config option
There are many places in the kernel where we need to zero out larger
chunks, but the maximum segment we can zero out at a time with ZERO_PAGE
is limited to PAGE_SIZE.
This concern was raised during the review of adding Large Block Size support
to XFS[1][2].
This is especially annoying in block devices and filesystems where we
attach multiple ZERO_PAGEs to the bio in different bvecs. With multipage
bvec support in the block layer, it is much more efficient to send out
a larger zero page as part of a single bvec (a simplified sketch of the
current pattern follows the list of call sites below).
Some examples of places in the kernel where this could be useful:
- blkdev_issue_zero_pages()
- iomap_dio_zero()
- vmalloc.c:zero_iter()
- rxperf_process_call()
- fscrypt_zeroout_range_inline_crypt()
- bch2_checksum_update()
...
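
To make that concrete, here is a simplified sketch (not the exact
kernel code) of the pattern these call sites follow today: zero data
has to be added one PAGE_SIZE bvec at a time.

#include <linux/bio.h>
#include <linux/mm.h>

/*
 * Simplified sketch of the current pattern: zeroing a range by
 * repeatedly attaching ZERO_PAGE(0), one PAGE_SIZE bvec at a time.
 * Splitting across multiple bios and error handling are omitted.
 */
static void zero_range_with_zero_page(struct bio *bio, sector_t nr_sects)
{
	while (nr_sects) {
		unsigned int len = min_t(sector_t, PAGE_SIZE,
					 nr_sects << SECTOR_SHIFT);

		if (bio_add_page(bio, ZERO_PAGE(0), len, 0) != len)
			break;
		nr_sects -= len >> SECTOR_SHIFT;
	}
}
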
We already have huge_zero_folio, which is allocated on demand and
deallocated by the shrinker once there are no users of it left.
But to use huge_zero_folio, we need to pass an mm struct, and a folio
put needs to be called in the destructor. This makes sense for systems
with memory constraints, but for bigger servers it does not matter, as
long as the PMD size is reasonable (as it is on x86).
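
If I am reading the current interface right, the on-demand lifecycle
looks roughly like this from a user's point of view (the zero_user
struct and helpers below are made up purely for illustration):

#include <linux/huge_mm.h>

/*
 * Rough sketch of today's on-demand usage (names are made up for
 * illustration): a user has to hold on to an mm, take a reference
 * through it, and drop that reference again in its teardown path so
 * the shrinker can free the folio once the last user is gone.
 */
struct zero_user {
	struct mm_struct *mm;
	struct folio *zero_folio;
};

static int zero_user_init(struct zero_user *zu, struct mm_struct *mm)
{
	zu->mm = mm;
	zu->zero_folio = mm_get_huge_zero_folio(mm);
	return zu->zero_folio ? 0 : -ENOMEM;
}

static void zero_user_destroy(struct zero_user *zu)
{
	/* Must be paired with the get above. */
	mm_put_huge_zero_folio(zu->mm);
}
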
Add a config option STATIC_PMD_ZERO_PAGE that will always allocate
the huge_zero_folio, which will never be freed. This makes it possible
to use the huge_zero_folio without passing an mm struct or calling a
folio put in the destructor.
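
Conceptually, the option boils down to something like the sketch below.
This is only meant to illustrate the idea, not the actual patch;
details such as huge_zero_pfn handling are omitted and the real code
may look different:

/*
 * Illustrative only, not the actual patch: allocate the PMD-sized
 * zero folio once at init and keep a permanent reference, so the
 * shrinker never tears it down and users need no get/put.
 */
static int __init static_pmd_zero_page_init(void)
{
	struct folio *folio;

	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, HPAGE_PMD_ORDER);
	if (!folio)
		return -ENOMEM;

	huge_zero_folio = folio;	/* published once, never freed */
	return 0;
}
early_initcall(static_pmd_zero_page_init);
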
As an example, I have converted blkdev_issue_zero_pages() as part of
this series.
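
The direction of that conversion is roughly the following (again a
sketch under the assumptions above, not the actual patch): fill the bio
with folio-sized chunks of the zero folio instead of one ZERO_PAGE(0)
per PAGE_SIZE bvec.

#include <linux/bio.h>

/*
 * Rough sketch of the conversion direction (not the actual patch):
 * attach PMD-sized chunks of the static zero folio, so a single bvec
 * covers what previously needed many PAGE_SIZE bvecs.
 */
static void zero_range_with_huge_folio(struct bio *bio, sector_t nr_sects,
				       struct folio *zero_folio)
{
	while (nr_sects) {
		unsigned int len = min_t(sector_t, folio_size(zero_folio),
					 nr_sects << SECTOR_SHIFT);

		if (!bio_add_folio(bio, zero_folio, len, 0))
			break;
		nr_sects -= len >> SECTOR_SHIFT;
	}
}
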
I will send patches converting individual subsystems to use the
huge_zero_folio once this gets upstreamed.
Looking forward to some feedback.
[1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
[2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/
Changes since v1:
- Added the config option based on the feedback from David.
- Removed iomap patches so that I don't clutter this series with too
many subsystems.
Pankaj Raghav (3):
  mm: move huge_zero_folio from huge_memory.c to memory.c
  mm: add STATIC_PMD_ZERO_PAGE config option
  block: use mm_huge_zero_folio in __blkdev_issue_zero_pages()

 arch/x86/Kconfig        |   1 +
 block/blk-lib.c         |  16 ++++--
 include/linux/huge_mm.h |  16 ------
 include/linux/mm.h      |  16 ++++++
 mm/Kconfig              |  12 ++++
 mm/huge_memory.c        | 105 +---------------------------------
 mm/memory.c             | 121 ++++++++++++++++++++++++++++++++++++++++
 7 files changed, 164 insertions(+), 123 deletions(-)
base-commit: f1f6aceb82a55f87d04e2896ac3782162e7859bd
--
2.47.2