Message-ID: <6e0be4f6-0ac2-4861-b25d-3d94c6f35a9f@arm.com>
Date: Wed, 7 May 2025 12:14:51 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>,
 David Hildenbrand <david@...hat.com>, Dev Jain <dev.jain@....com>,
 akpm@...ux-foundation.org, hughd@...gle.com
Cc: willy@...radead.org, 21cnbao@...il.com, ziy@...dia.com,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: mincore: use folio_pte_batch() to batch process
 large folios

On 07/05/2025 11:03, Baolin Wang wrote:
> 
> 
> On 2025/5/7 17:54, David Hildenbrand wrote:
>> On 07.05.25 11:48, Baolin Wang wrote:
>>>
>>>
>>> On 2025/5/7 13:12, Dev Jain wrote:
>>>>
>>>>
>>>> On 26/03/25 9:08 am, Baolin Wang wrote:
>>>>> When I tested the mincore() syscall, I observed that it takes longer
>>>>> with 64K mTHP enabled on my Arm64 server. The reason is that
>>>>> mincore_pte_range() still checks each PTE individually, even when the
>>>>> PTEs are contiguous, which is not efficient.
>>>>>
>>>>> Thus we can use folio_pte_batch() to get the number of contiguous
>>>>> present PTEs in one batch, which improves performance. I tested the
>>>>> mincore() syscall with 1G of anonymous memory populated with 64K mTHP,
>>>>> and observed an obvious performance improvement:
>>>>>
>>>>> w/o patch        w/ patch        changes
>>>>> 6022us            1115us            +81%
>>>>>
>>>>> Moreover, I also tested mincore() with mTHP/THP disabled, and did not
>>>>> see any obvious regression.
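
(A minimal userspace sketch of the kind of benchmark described above -- an
illustration with assumed details, not Baolin's actual test program; it
assumes 64K mTHP has been enabled beforehand via
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled:)

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 1UL << 30;			/* 1G anonymous mapping */
		size_t pages = len / sysconf(_SC_PAGESIZE);
		unsigned char *vec = malloc(pages);	/* one byte per page */
		struct timespec t0, t1;
		char *buf;

		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED || !vec)
			return 1;
		memset(buf, 1, len);			/* populate all PTEs */

		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (mincore(buf, len, vec))
			return 1;
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("mincore() took %ldus\n",
		       (t1.tv_sec - t0.tv_sec) * 1000000L +
		       (t1.tv_nsec - t0.tv_nsec) / 1000);
		return 0;
	}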
>>>>>
>>>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>>>> ---
>>>>>    mm/mincore.c | 27 ++++++++++++++++++++++-----
>>>>>    1 file changed, 22 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/mm/mincore.c b/mm/mincore.c
>>>>> index 832f29f46767..88be180b5550 100644
>>>>> --- a/mm/mincore.c
>>>>> +++ b/mm/mincore.c
>>>>> @@ -21,6 +21,7 @@
>>>>>    #include <linux/uaccess.h>
>>>>>    #include "swap.h"
>>>>> +#include "internal.h"
>>>>>    static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
>>>>>                unsigned long end, struct mm_walk *walk)
>>>>> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>>>>        pte_t *ptep;
>>>>>        unsigned char *vec = walk->private;
>>>>>        int nr = (end - addr) >> PAGE_SHIFT;
>>>>> +    int step, i;
>>>>>        ptl = pmd_trans_huge_lock(pmd, vma);
>>>>>        if (ptl) {
>>>>> @@ -118,16 +120,31 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>>>>            walk->action = ACTION_AGAIN;
>>>>>            return 0;
>>>>>        }
>>>>> -    for (; addr != end; ptep++, addr += PAGE_SIZE) {
>>>>> +    for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>>>>>            pte_t pte = ptep_get(ptep);
>>>>> +        step = 1;
>>>>>            /* We need to do cache lookup too for pte markers */
>>>>>            if (pte_none_mostly(pte))
>>>>>                __mincore_unmapped_range(addr, addr + PAGE_SIZE,
>>>>>                             vma, vec);
>>>>> -        else if (pte_present(pte))
>>>>> -            *vec = 1;
>>>>> -        else { /* pte is a swap entry */
>>>>> +        else if (pte_present(pte)) {
>>>>> +            if (pte_batch_hint(ptep, pte) > 1) {
>>>>> +                struct folio *folio = vm_normal_folio(vma, addr, pte);
>>>>> +
>>>>> +                if (folio && folio_test_large(folio)) {
>>>>> +                    const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>>>>> +                                FPB_IGNORE_SOFT_DIRTY;
>>>>> +                    int max_nr = (end - addr) / PAGE_SIZE;
>>>>> +
>>>>> +                    step = folio_pte_batch(folio, addr, ptep, pte,
>>>>> +                            max_nr, fpb_flags, NULL, NULL, NULL);
>>>>> +                }
>>>>> +            }
>>>>
>>>> Can we go ahead with this along with [1]? That will help us generalize
>>>> it for all arches.
>>>>
>>>> [1] https://lore.kernel.org/all/20250506050056.59250-3-dev.jain@arm.com/
>>>> (Please replace PAGE_SIZE with 1)
>>>
>>> As discussed with Ryan, we don't need to call folio_pte_batch() at all
>>> (something like the code below would do), so your patch seems
>>> unnecessarily complicated. However, David is unhappy about the
>>> open-coded pte_batch_hint().
>>
>> I can live with the below :)
>>
>> Having something more universal maybe does not make sense here. Any form
>> of batching contiguous PTEs (contiguous PFNs) -- whether with folios or
>> not -- is not required here, as we really only want to:
>>
>> (a) Identify pte_present() PTEs
>> (b) Avoid the cost of repeated ptep_get() with cont-pte.
> 
> Good. I will change the patch and resend it. Thanks.

Agreed. Thanks Baolin!
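
For context, the "code below" referenced in Baolin's reply is elided from
this excerpt. A sketch of a pte_batch_hint()-only loop satisfying David's
(a) and (b) -- an illustration, not the code actually posted or merged --
could look like the following (step and i declared as in the patch above):

	/*
	 * Illustrative sketch only. Batch present PTEs using the cont-PTE
	 * hint alone, with no folio lookup: (a) identify pte_present()
	 * PTEs, (b) avoid repeated ptep_get() for cont-pte mappings.
	 */
	for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
		pte_t pte = ptep_get(ptep);

		step = 1;
		/* We need to do cache lookup too for pte markers */
		if (pte_none_mostly(pte)) {
			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
						 vma, vec);
		} else if (pte_present(pte)) {
			unsigned int batch = pte_batch_hint(ptep, pte);

			if (batch > 1) {
				unsigned int max_nr = (end - addr) >> PAGE_SHIFT;

				step = min_t(unsigned int, batch, max_nr);
			}
			for (i = 0; i < step; i++)
				vec[i] = 1;
		} else {
			/* pte is a swap entry: handled as before */
		}
		vec += step;
	}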

