Message-ID: <20250812155512.926011-1-ziy@nvidia.com>
Date: Tue, 12 Aug 2025 11:55:08 -0400
From: Zi Yan <ziy@...dia.com>
To: Wei Yang <richard.weiyang@...il.com>,
wang lian <lianux.mm@...il.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
David Hildenbrand <david@...hat.com>,
linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Zi Yan <ziy@...dia.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Shuah Khan <shuah@...nel.org>,
linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: [PATCH v3 0/4] Better split_huge_page_test result check

This patchset uses kpageflags to get after-split folio orders for a better
split_huge_page_test result check[1]. The added gather_folio_orders() scans
through a VPN range and collects the number of folios at each order.
check_folio_orders() compares the result of gather_folio_orders() against a
given list of expected per-order folio counts.

This patchset also adds the new order and in-folio offset to the split huge
page debugfs interface's pr_debug() output.
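
To illustrate the approach, below is a minimal, self-contained sketch of the
kpageflags-based counting idea. It is not the actual vm_util.c code; the
helper name, signature, and error handling are assumptions. The per-order
counting boils down to translating each virtual page to a PFN via
/proc/self/pagemap and reading the compound head/tail bits from
/proc/kpageflags (root required); the bit numbers and pagemap layout are from
Documentation/admin-guide/mm/pagemap.rst:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define PM_PFN_MASK		((1ULL << 55) - 1)	/* pagemap bits 0-54 */
#define PM_PRESENT		(1ULL << 63)		/* pagemap bit 63 */
#define KPF_COMPOUND_HEAD	15
#define KPF_COMPOUND_TAIL	16

static int read_u64_at(int fd, uint64_t index, uint64_t *val)
{
	return pread(fd, val, sizeof(*val), index * sizeof(*val)) ==
	       sizeof(*val) ? 0 : -1;
}

/*
 * Count the folios of each order backing [vaddr, vaddr + len).
 * orders[] must be zeroed by the caller and hold max_order + 1 entries.
 */
static int gather_folio_orders_sketch(char *vaddr, size_t len, int max_order,
				      uint64_t *orders)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	int pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
	int kpageflags_fd = open("/proc/kpageflags", O_RDONLY);
	char *end = vaddr + len;
	int ret = -1;

	if (pagemap_fd < 0 || kpageflags_fd < 0)
		goto out;

	while (vaddr < end) {
		uint64_t ent, kpf, pfn, nr = 1;
		int order = 0;

		if (read_u64_at(pagemap_fd, (uintptr_t)vaddr / pagesize, &ent))
			goto out;
		if (!(ent & PM_PRESENT)) {	/* not populated: skip one page */
			vaddr += pagesize;
			continue;
		}
		pfn = ent & PM_PFN_MASK;
		if (read_u64_at(kpageflags_fd, pfn, &kpf))
			goto out;

		if (kpf & (1ULL << KPF_COMPOUND_HEAD)) {
			/* Derive the order by counting the tail pages. */
			while (!read_u64_at(kpageflags_fd, pfn + nr, &kpf) &&
			       (kpf & (1ULL << KPF_COMPOUND_TAIL)))
				nr++;
			while ((1ULL << order) < nr)
				order++;
		}
		if (order > max_order)
			goto out;
		orders[order]++;
		/* Step over the whole folio, not just one base page. */
		vaddr += pagesize << order;
	}
	ret = 0;
out:
	if (pagemap_fd >= 0)
		close(pagemap_fd);
	if (kpageflags_fd >= 0)
		close(kpageflags_fd);
	return ret;
}

A check_folio_orders()-style helper would then simply compare the resulting
orders[] array against the expected per-order counts.
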
Changelog
===
From V2[3]:
1. Added two missing free()s in check_folio_orders().
2. Reimplemented is_backed_by_thp() to use kpageflags to get precise
folio order information and renamed it to is_backed_by_folio() in new
Patch 3 (a rough sketch of the idea follows this list).
3. Renamed *_file to *_fd in Patch 2.
4. Indentation fixes.
5. Fixed vaddr stepping issue in gather_folio_orders() when a compound
tail page is encountered.
6. Used pmd_order in place of max_order in split_huge_page_test.c.
7. Documented gather_folio_orders().
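
For reference, the idea behind the kpageflags-based check in item 2 can be
sketched as below. This is an assumption-laden sketch, not the code in Patch
3: the function name and signature are made up, it assumes vaddr points at
the first page of the folio, and it assumes the caller has already opened
/proc/self/pagemap and /proc/kpageflags:

#define _GNU_SOURCE
#include <stdint.h>
#include <unistd.h>

#define PM_PFN_MASK		((1ULL << 55) - 1)	/* pagemap bits 0-54 */
#define PM_PRESENT		(1ULL << 63)		/* pagemap bit 63 */
#define KPF_COMPOUND_HEAD	15
#define KPF_COMPOUND_TAIL	16

/* Return 1 iff vaddr is backed by the first page of a folio of this order. */
static int is_backed_by_folio_sketch(char *vaddr, int order,
				     int pagemap_fd, int kpageflags_fd)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	uint64_t ent, kpf, pfn, i, nr = 1ULL << order;

	if (pread(pagemap_fd, &ent, sizeof(ent),
		  (uintptr_t)vaddr / pagesize * sizeof(ent)) != sizeof(ent) ||
	    !(ent & PM_PRESENT))
		return 0;
	pfn = ent & PM_PFN_MASK;

	if (pread(kpageflags_fd, &kpf, sizeof(kpf),
		  pfn * sizeof(kpf)) != sizeof(kpf))
		return 0;

	/* An order-0 page must not be part of any compound page. */
	if (order == 0)
		return !(kpf & ((1ULL << KPF_COMPOUND_HEAD) |
				(1ULL << KPF_COMPOUND_TAIL)));

	if (!(kpf & (1ULL << KPF_COMPOUND_HEAD)))
		return 0;

	/* The following 2^order - 1 pages must all be tails of this folio... */
	for (i = 1; i < nr; i++) {
		if (pread(kpageflags_fd, &kpf, sizeof(kpf),
			  (pfn + i) * sizeof(kpf)) != sizeof(kpf) ||
		    !(kpf & (1ULL << KPF_COMPOUND_TAIL)))
			return 0;
	}
	/* ... and the next page must not be, so the folio is not larger. */
	if (pread(kpageflags_fd, &kpf, sizeof(kpf),
		  (pfn + nr) * sizeof(kpf)) == sizeof(kpf) &&
	    (kpf & (1ULL << KPF_COMPOUND_TAIL)))
		return 0;
	return 1;
}
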
From V1[2]:
1. Dropped the split_huge_pages_pid() for-loop step change to avoid
interfering with PTE-mapped THP handling. split_huge_page_test.c is
changed to perform the split on the [addr, addr + pagesize) range to
limit it to one folio_split() per folio.
2. Moved pr_debug changes in Patch 2 to Patch 1.
3. Moved KPF_* to vm_util.h and used PAGEMAP_PFN instead of local PFN_MASK.
4. Used pagemap_get_pfn() helper.
5. Used char *vaddr and size_t len as inputs to gather_folio_orders() and
check_folio_orders() instead of vpn and nr_pages.
6. Removed variable-length arrays and used malloc() instead.
[1] https://lore.kernel.org/linux-mm/e2f32bdb-e4a4-447c-867c-31405cbba151@redhat.com/
[2] https://lore.kernel.org/linux-mm/20250806022045.342824-1-ziy@nvidia.com/
[3] https://lore.kernel.org/linux-mm/20250808190144.797076-1-ziy@nvidia.com/
Zi Yan (4):
mm/huge_memory: add new_order and offset to split_huge_pages*()
pr_debug.
selftests/mm: add check_folio_orders() helper.
selftests/mm: reimplement is_backed_by_thp() with more precise check
selftests/mm: check after-split folio orders in split_huge_page_test.
mm/huge_memory.c | 8 +-
.../selftests/mm/split_huge_page_test.c | 154 +++++++++++-----
tools/testing/selftests/mm/vm_util.c | 173 ++++++++++++++++++
tools/testing/selftests/mm/vm_util.h | 8 +
4 files changed, 292 insertions(+), 51 deletions(-)
--
2.47.2