Message-ID: <d73f0591-e407-4350-9ddd-dc05ff571a8d@redhat.com>
Date: Thu, 3 Jul 2025 10:22:46 +0200
From: David Hildenbrand <david@...hat.com>
To: Aboorva Devarajan <aboorvad@...ux.ibm.com>, akpm@...ux-foundation.org,
Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com, shuah@...nel.org,
pfalcato@...e.de, ziy@...dia.com, baolin.wang@...ux.alibaba.com,
npache@...hat.com, ryan.roberts@....com, dev.jain@....com, baohua@...nel.org
Cc: linux-mm@...ck.org, linux-kselftest@...r.kernel.org,
linux-kernel@...r.kernel.org, donettom@...ux.ibm.com, ritesh.list@...il.com
Subject: Re: [PATCH v2 4/7] mm/selftests: Fix split_huge_page_test failure on
systems with 64KB page size
On 03.07.25 08:06, Aboorva Devarajan wrote:
> From: Donet Tom <donettom@...ux.ibm.com>
>
> The split_huge_page_test fails on systems with a 64KB base page size.
> This is because the order of a 2MB huge page depends on the base page
> size (order = log2(2MB / base page size)):
>
> On 64KB systems, the order is 5 (2MB / 64KB = 32 = 2^5).
>
> On 4KB systems, it is 9 (2MB / 4KB = 512 = 2^9).
>
> The test currently assumes a maximum huge page order of 9, which is only
> valid for 4KB base page systems. On systems with 64KB pages, attempting
> to split huge pages beyond their actual order (5) causes the test to fail.
>
> Calculate the huge page order from the system's base page size instead
> of hardcoding it. With this change, the test runs successfully on both
> 64KB and 4KB page size systems.
>
> Fixes: fa6c02315f745 ("mm: huge_memory: a new debugfs interface for splitting THP tests")
> Signed-off-by: Donet Tom <donettom@...ux.ibm.com>
> Signed-off-by: Aboorva Devarajan <aboorvad@...ux.ibm.com>
> ---
> .../selftests/mm/split_huge_page_test.c | 23 ++++++++++++++-----
> 1 file changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
> index aa7400ed0e99..38296a758330 100644
> --- a/tools/testing/selftests/mm/split_huge_page_test.c
> +++ b/tools/testing/selftests/mm/split_huge_page_test.c
> @@ -514,6 +514,15 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
> }
> }
>
> +static unsigned int get_order(unsigned int pages)
> +{
> + unsigned int order = 0;
> +
> + while ((1U << order) < pages)
> + order++;
> + return order;
> +}
I think this can simply be

	return 32 - __builtin_clz(pages - 1);

which mimics what get_order() in the kernel does for BITS_PER_LONG == 32,
or, simpler,

	return 31 - __builtin_clz(pages);

E.g., for pages=512 you get 31 - 22 = 9.
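FWIW, a quick standalone check of the equivalence (an untested sketch of
mine, not from the patch; get_order_loop just copies the helper above):

	/*
	 * Untested sketch. Note the "31 - clz" form is a plain
	 * floor(log2()): it matches the loop only when 'pages' is a
	 * power of two, which pmd_pagesize / pagesize always is here.
	 */
	#include <assert.h>

	static unsigned int get_order_loop(unsigned int pages)
	{
		unsigned int order = 0;

		while ((1U << order) < pages)
			order++;
		return order;
	}

	int main(void)
	{
		/* 2MB PMD: 32 64KB pages (order 5), 512 4KB pages (order 9) */
		assert(get_order_loop(32) == 31 - __builtin_clz(32));
		assert(get_order_loop(512) == 31 - __builtin_clz(512));
		assert(get_order_loop(512) == 32 - __builtin_clz(512 - 1));
		return 0;
	}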
> +
> int main(int argc, char **argv)
> {
> int i;
> @@ -523,6 +532,7 @@ int main(int argc, char **argv)
> const char *fs_loc;
> bool created_tmp;
> int offset;
> + unsigned int max_order;
>
> ksft_print_header();
>
> @@ -534,32 +544,33 @@ int main(int argc, char **argv)
> if (argc > 1)
> optional_xfs_path = argv[1];
>
> - ksft_set_plan(1+8+1+9+9+8*4+2);
> -
> pagesize = getpagesize();
> pageshift = ffs(pagesize) - 1;
> pmd_pagesize = read_pmd_pagesize();
> if (!pmd_pagesize)
> ksft_exit_fail_msg("Reading PMD pagesize failed\n");
>
> + max_order = get_order(pmd_pagesize/pagesize);
> + ksft_set_plan(1+(max_order-1)+1+max_order+max_order+(max_order-1)*4+2);
Wow. Can we simplify that in any sane way?
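Just a thought (untested sketch; the per-term comments are my guess at
which test group each count covers): building the plan incrementally
would at least make the arithmetic self-documenting.

	unsigned int plan = 0;

	plan += 1;			/* split_pmd_zero_pages() */
	plan += max_order - 1;		/* split_pmd_thp_to_order(), order 1 skipped */
	plan += 1;			/* split_pte_mapped_thp() */
	plan += max_order;		/* split_file_backed_thp() */
	plan += max_order;		/* split_thp_in_pagecache_to_order_at(..., -1) */
	plan += (max_order - 1) * 4;	/* offset variants */
	plan += 2;			/* remaining fixed tests */
	ksft_set_plan(plan);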
> +
> fd_size = 2 * pmd_pagesize;
>
> split_pmd_zero_pages();
>
> - for (i = 0; i < 9; i++)
> + for (i = 0; i < max_order; i++)
> if (i != 1)
> split_pmd_thp_to_order(i);
>
> split_pte_mapped_thp();
> - for (i = 0; i < 9; i++)
> + for (i = 0; i < max_order; i++)
> split_file_backed_thp(i);
>
> created_tmp = prepare_thp_fs(optional_xfs_path, fs_loc_template,
> &fs_loc);
> - for (i = 8; i >= 0; i--)
> + for (i = (max_order-1); i >= 0; i--)
"i = max_order - 1"
> split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, -1);
>
> - for (i = 0; i < 9; i++)
> + for (i = 0; i < max_order; i++)
> for (offset = 0;
> offset < pmd_pagesize / pagesize;
> offset += MAX(pmd_pagesize / pagesize / 4, 1 << i))
--
Cheers,
David / dhildenb