Message-ID: <3a99961c-6eda-434a-8721-58eaf8b989db@linux.ibm.com>
Date: Thu, 3 Jul 2025 20:00:46 +0530
From: Donet Tom <donettom@...ux.ibm.com>
To: Zi Yan <ziy@...dia.com>
Cc: David Hildenbrand <david@...hat.com>,
Aboorva Devarajan <aboorvad@...ux.ibm.com>, akpm@...ux-foundation.org,
Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com, shuah@...nel.org,
pfalcato@...e.de, baolin.wang@...ux.alibaba.com, npache@...hat.com,
ryan.roberts@....com, dev.jain@....com, baohua@...nel.org,
linux-mm@...ck.org, linux-kselftest@...r.kernel.org,
linux-kernel@...r.kernel.org, ritesh.list@...il.com
Subject: Re: [PATCH v2 4/7] mm/selftests: Fix split_huge_page_test failure on
systems with 64KB page size
On 7/3/25 7:51 PM, Zi Yan wrote:
> On 3 Jul 2025, at 4:58, Donet Tom wrote:
>
>> On 7/3/25 1:52 PM, David Hildenbrand wrote:
>>> On 03.07.25 08:06, Aboorva Devarajan wrote:
>>>> From: Donet Tom <donettom@...ux.ibm.com>
>>>>
>>>> The split_huge_page_test fails on systems with a 64KB base page size.
>>>> This is because the order of a 2MB huge page is different:
>>>>
>>>> On 64KB systems, the order is 5.
>>>>
>>>> On 4KB systems, it's 9.
>>>>
>>>> The test currently assumes a maximum huge page order of 9, which is only
>>>> valid for 4KB base page systems. On systems with 64KB pages, attempting
>>>> to split huge pages beyond their actual order (5) causes the test to fail.
>>>>
>>>> In this patch, we calculate the huge page order based on the system's base
>>>> page size. With this change, the tests now run successfully on both 64KB
>>>> and 4KB page size systems.
>>>>
>>>> Fixes: fa6c02315f745 ("mm: huge_memory: a new debugfs interface for splitting THP tests")
>>>> Signed-off-by: Donet Tom <donettom@...ux.ibm.com>
>>>> Signed-off-by: Aboorva Devarajan <aboorvad@...ux.ibm.com>
>>>> ---
>>>> .../selftests/mm/split_huge_page_test.c | 23 ++++++++++++++-----
>>>> 1 file changed, 17 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
>>>> index aa7400ed0e99..38296a758330 100644
>>>> --- a/tools/testing/selftests/mm/split_huge_page_test.c
>>>> +++ b/tools/testing/selftests/mm/split_huge_page_test.c
>>>> @@ -514,6 +514,15 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
>>>> }
>>>> }
>>>> +static unsigned int get_order(unsigned int pages)
>>>> +{
>>>> + unsigned int order = 0;
>>>> +
>>>> + while ((1U << order) < pages)
>>>> + order++;
>>>> + return order;
>>>> +}
>>> I think this can simply be
>>>
>>> return 32 - __builtin_clz(pages - 1);
>>>
>>> That mimics what get_order() in the kernel does for BITS_PER_LONG == 32
>>>
>>> or simpler
>>>
>>> return 31 - __builtin_clz(pages);
>>>
>>> E.g., if pages=512, you get 31-22=9
>>
>> Sure David, we will change it.
>>
>>
>>>> +
>>>> int main(int argc, char **argv)
>>>> {
>>>> int i;
>>>> @@ -523,6 +532,7 @@ int main(int argc, char **argv)
>>>> const char *fs_loc;
>>>> bool created_tmp;
>>>> int offset;
>>>> + unsigned int max_order;
>>>> ksft_print_header();
>>>> @@ -534,32 +544,33 @@ int main(int argc, char **argv)
>>>> if (argc > 1)
>>>> optional_xfs_path = argv[1];
>>>> - ksft_set_plan(1+8+1+9+9+8*4+2);
>>>> -
>>>> pagesize = getpagesize();
>>>> pageshift = ffs(pagesize) - 1;
>>>> pmd_pagesize = read_pmd_pagesize();
>>>> if (!pmd_pagesize)
>>>> ksft_exit_fail_msg("Reading PMD pagesize failed\n");
>>>> + max_order = get_order(pmd_pagesize/pagesize);
>>>> + ksft_set_plan(1+(max_order-1)+1+max_order+max_order+(max_order-1)*4+2);
>>> Wow. Can we simplify that in any sane way?
>>
>> It counts the tests group by group. Let me try to simplify it and send the next version.
> Yeah, I wrote it that way (ksft_set_plan(1+8+1+9+9+8*4+2);) to count the
> different test groups separately, in the same order as the tests that follow,
> so that I could get the ksft_set_plan number right when adding or removing
> tests. Maybe it is fine to just sum them up now.
Sure. Thank you
>
> Best Regards,
> Yan, Zi
>