Message-ID: <f474ef7b-71f3-4169-ba30-74f4afbaeee6@redhat.com>
Date: Wed, 2 Jul 2025 10:28:51 +0200
From: David Hildenbrand <david@...hat.com>
To: Muhammad Usama Anjum <usama.anjum@...labora.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>
Cc: kernel@...labora.com, linux-mm@...ck.org,
 linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] selftests/mm: pagemap_scan ioctl: add PFN ZERO test cases

On 02.07.25 09:39, Muhammad Usama Anjum wrote:
> On 7/1/25 7:51 PM, David Hildenbrand wrote:
>> On 30.06.25 12:24, Muhammad Usama Anjum wrote:
>>> Add test cases to test the correctness of PFN ZERO flag of pagemap_scan
>>> ioctl. Test with normal pages backed memory and huge pages backed
>>> memory.
>>
>> Just to verify: would this trigger on kernels before my fix?
> Yes, it does trigger the bug without the fix.
> 
>>
>>>
>>> Cc: David Hildenbrand <david@...hat.com>
>>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@...labora.com>
>>> ---
>>> The bug has been fixed [1].
>>>
>>> [1] https://lore.kernel.org/all/20250617143532.2375383-1-david@redhat.com
>>> ---
>>>    tools/testing/selftests/mm/pagemap_ioctl.c | 57 +++++++++++++++++++++-
>>>    1 file changed, 56 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tools/testing/selftests/mm/pagemap_ioctl.c b/tools/testing/selftests/mm/pagemap_ioctl.c
>>> index 57b4bba2b45f3..6138de0087edf 100644
>>> --- a/tools/testing/selftests/mm/pagemap_ioctl.c
>>> +++ b/tools/testing/selftests/mm/pagemap_ioctl.c
>>> @@ -1,4 +1,5 @@
>>>    // SPDX-License-Identifier: GPL-2.0
>>> +
>>>    #define _GNU_SOURCE
>>>    #include <stdio.h>
>>>    #include <fcntl.h>
>>> @@ -1480,6 +1481,57 @@ static void transact_test(int page_size)
>>>                      extra_thread_faults);
>>>    }
>>> +void zeropfn_tests(void)
>>> +{
>>> +    unsigned long long mem_size;
>>> +    struct page_region vec;
>>> +    int i, ret;
>>> +    char *mem;
>>> +
>>> +    /* Test with page backed memory */
>>
>> What is "page backed memory" ? :)
> I mean normal memory, i.e. memory that isn't backed by huge pages. I've
> renamed the comment to "Test with normal memory".
> 
>>
>>> +    mem_size = 10 * page_size;
>>> +    mem = mmap(NULL, mem_size, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0);
>>> +    if (mem == MAP_FAILED)
>>> +        ksft_exit_fail_msg("error nomem\n");
>>> +
>>> +    /* Touch each page to ensure it's mapped */
>>> +    for (i = 0; i < mem_size; i += page_size)
>>> +        (void)((volatile char *)mem)[i];
>>> +
>>> +    ret = pagemap_ioctl(mem, mem_size, &vec, 1, 0,
>>> +                (mem_size / page_size), PAGE_IS_PFNZERO, 0, 0, PAGE_IS_PFNZERO);
>>> +    if (ret < 0)
>>> +        ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
>>> +
>>> +    ksft_test_result(ret == 1 && LEN(vec) == (mem_size / page_size),
>>> +             "%s all pages must have PFNZERO set\n", __func__);
>>> +
>>> +    munmap(mem, mem_size);
>>> +
>>> +    /* Test with huge page */
>>> +    mem_size = 10 * hpage_size;
>>> +    mem = memalign(hpage_size, mem_size);
>>> +    if (!mem)
>>> +        ksft_exit_fail_msg("error nomem\n");
>>> +
>>> +    ret = madvise(mem, mem_size, MADV_HUGEPAGE);
>>> +    if (ret)
>>> +        ksft_exit_fail_msg("madvise failed %d %s\n", errno, strerror(errno));
>>
>> Might fail on older kernels, so we usually treat this as a skip.
> I'll skip it in next version.
> 
>>
>>> +
>>> +    for (i = 0; i < mem_size; i += hpage_size)
>>> +        (void)((volatile char *)mem)[i];
>>> +
>>> +    ret = pagemap_ioctl(mem, mem_size, &vec, 1, 0,
>>> +                (mem_size / page_size), PAGE_IS_PFNZERO, 0, 0, PAGE_IS_PFNZERO);
>>> +    if (ret < 0)
>>> +        ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
>>> +
>>> +    ksft_test_result(ret == 1 && LEN(vec) == (mem_size / page_size),
>>> +             "%s all huge pages must have PFNZERO set\n", __func__);
>>
>> Wouldn't this be able to fail if
>> /sys/kernel/mm/transparent_hugepage/use_zero_page is set to false,
> I wasn't aware of it. I'll check use_zero_page first as an initial condition.
> 
>> or if mmap() gave us a suboptimally-aligned range?
> mem = memalign(hpage_size, mem_size) is being used to allocate this, so
> the alignment should be correct.
> 
>>
>> You'd have to read each and every page to get the ordinary shared
>> zeropage in these configs instead, without making the test too complicated.
> In the above for loop, we are reading each new page already. Let's
> resolve this and then I'll post the v2 which is ready.
> 
>>
>>> +
>>> +    free(mem);
>>
>>
>> Shouldn't this be an munmap() ?
> free() is being used to free memory allocated by memalign().

Oh, I missed that detail.

I'm afraid memalign+free might not be what you want: there is no
guarantee that the memory you get back hasn't been used before, i.e. that
it isn't already populated with non-zero pages.

-- 
Cheers,

David / dhildenb

