Message-ID: <a084d29e-8027-4d20-a1b4-584bdbbe111d@arm.com>
Date: Mon, 14 Apr 2025 14:21:27 +0530
From: Dev Jain <dev.jain@....com>
To: Ryan Roberts <ryan.roberts@....com>, Lance Yang <ioworker0@...il.com>,
 Xavier <xavier_qy@....com>
Cc: Barry Song <21cnbao@...il.com>, catalin.marinas@....com, will@...nel.org,
 akpm@...ux-foundation.org, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, David Hildenbrand <david@...hat.com>,
 Matthew Wilcox <willy@...radead.org>, Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH v1] mm/contpte: Optimize loop to reduce redundant
 operations



On 14/04/25 1:36 pm, Ryan Roberts wrote:
> On 12/04/2025 06:05, Lance Yang wrote:
>> On Sat, Apr 12, 2025 at 1:30 AM Dev Jain <dev.jain@....com> wrote:
>>>
>>> +others
>>>
>>> On 11/04/25 2:55 am, Barry Song wrote:
>>>> On Mon, Apr 7, 2025 at 9:23 PM Xavier <xavier_qy@....com> wrote:
>>>>>
>>>>> This commit optimizes the contpte_ptep_get function by adding early
>>>>>    termination logic. It checks if the dirty and young bits of orig_pte
>>>>>    are already set and skips redundant bit-setting operations during
>>>>>    the loop. This reduces unnecessary iterations and improves performance.
>>>>>
>>>>> Signed-off-by: Xavier <xavier_qy@....com>
>>>>> ---
>>>>>    arch/arm64/mm/contpte.c | 13 +++++++++++--
>>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>>>>> index bcac4f55f9c1..ca15d8f52d14 100644
>>>>> --- a/arch/arm64/mm/contpte.c
>>>>> +++ b/arch/arm64/mm/contpte.c
>>>>> @@ -163,17 +163,26 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
>>>>>
>>>>>           pte_t pte;
>>>>>           int i;
>>>>> +       bool dirty = false;
>>>>> +       bool young = false;
>>>>>
>>>>>           ptep = contpte_align_down(ptep);
>>>>>
>>>>>           for (i = 0; i < CONT_PTES; i++, ptep++) {
>>>>>                   pte = __ptep_get(ptep);
>>>>>
>>>>> -               if (pte_dirty(pte))
>>>>> +               if (!dirty && pte_dirty(pte)) {
>>>>> +                       dirty = true;
>>>>>                           orig_pte = pte_mkdirty(orig_pte);
>>>>> +               }
>>>>>
>>>>> -               if (pte_young(pte))
>>>>> +               if (!young && pte_young(pte)) {
>>>>> +                       young = true;
>>>>>                           orig_pte = pte_mkyoung(orig_pte);
>>>>> +               }
>>>>> +
>>>>> +               if (dirty && young)
>>>>> +                       break;
>>>>
>>>> This kind of optimization is always tricky. Dev previously tried a similar
>>>> approach to reduce the loop count, but it ended up causing performance
>>>> degradation:
>>>> https://lore.kernel.org/linux-mm/20240913091902.1160520-1-dev.jain@arm.com/
>>>>
>>>> So we may need actual data to validate this idea.
>>>
>>> The original v2 patch does not work, I changed it to the following:
>>>
>>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>>> index bcac4f55f9c1..db0ad38601db 100644
>>> --- a/arch/arm64/mm/contpte.c
>>> +++ b/arch/arm64/mm/contpte.c
>>> @@ -152,6 +152,16 @@ void __contpte_try_unfold(struct mm_struct *mm,
>>> unsigned long addr,
>>>    }
>>>    EXPORT_SYMBOL_GPL(__contpte_try_unfold);
>>>
>>> +#define CHECK_CONTPTE_FLAG(start, ptep, orig_pte, flag) \
>>> +       int _start; \
>>> +       pte_t *_ptep = ptep; \
>>> +       for (_start = start; _start < CONT_PTES; _start++, _ptep++) { \
>>> +               if (pte_##flag(__ptep_get(_ptep))) { \
>>> +                       orig_pte = pte_mk##flag(orig_pte); \
>>> +                       break; \
>>> +               } \
>>> +       }
>>> +
>>>    pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
>>>    {
>>>           /*
>>> @@ -169,11 +179,17 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
>>>           for (i = 0; i < CONT_PTES; i++, ptep++) {
>>>                   pte = __ptep_get(ptep);
>>>
>>> -               if (pte_dirty(pte))
>>> +               if (pte_dirty(pte)) {
>>>                           orig_pte = pte_mkdirty(orig_pte);
>>> +                       CHECK_CONTPTE_FLAG(i, ptep, orig_pte, young);
>>> +                       break;
>>> +               }
>>>
>>> -               if (pte_young(pte))
>>> +               if (pte_young(pte)) {
>>>                           orig_pte = pte_mkyoung(orig_pte);
>>> +                       CHECK_CONTPTE_FLAG(i, ptep, orig_pte, dirty);
>>> +                       break;
>>> +               }
>>>           }
>>>
>>>           return orig_pte;
>>>
>>> Some rudimentary testing with micromm reveals that this may be
>>> *slightly* faster. I cannot say for sure yet.
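For anyone wanting to gather such numbers without micromm, a rough user-space
harness along the following lines should exercise contpte_ptep_get() on the
read path. The region size, iteration count, and the reliance on smaps walks
being dominated by ptep_get() are assumptions of the sketch, not something
established in this thread; it also assumes 64K mTHP is enabled (e.g.
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled set to "always")
so the anon region ends up PTE-mapped with the contiguous bit:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SZ	(512UL << 20)	/* 512M of anon memory */

int main(void)
{
	char buf[4096];
	char *p = mmap(NULL, REGION_SZ, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int i, fd;

	if (p == MAP_FAILED)
		return 1;

	/* Ask for (m)THP so the mappings hopefully get folded to contpte. */
	madvise(p, REGION_SZ, MADV_HUGEPAGE);
	memset(p, 1, REGION_SZ);

	/* Each smaps_rollup read walks the page tables via ptep_get(). */
	for (i = 0; i < 100; i++) {
		fd = open("/proc/self/smaps_rollup", O_RDONLY);
		while (read(fd, buf, sizeof(buf)) > 0)
			;
		close(fd);
	}

	munmap(p, REGION_SZ);
	return 0;
}

Timing that loop (e.g. under perf stat) with and without the patch, for each
base page size, would give the kind of before/after data being asked for.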
>>
>> Yep, this change works as expected, IIUC.
>>
>> However, I'm still wondering if the added complexity is worth it for
>> such a slight/negligible performance gain. That said, if we have
>> solid numbers/data to back it up, all doubts would disappear ;)
> 
> I agree with Barry; we need clear performance improvement numbers to consider
> this type of optimization. I doubt there will be measurable improvement for 4K
> and 64K base pages (because the number of PTEs in a contpte block is only 16 and
> 32, respectively). But 16K base pages may benefit, given there are 128 PTEs in a
> contpte block in that case.
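(For reference, those counts are just the size mapped by one contiguous range
divided by the base page size; a tiny user-space check of the arithmetic, with
the usual arm64 contiguous-range sizes hard-coded rather than taken from the
kernel headers:)

#include <stdio.h>

int main(void)
{
	/* { base page size, bytes mapped by one contiguous range } */
	static const struct {
		long page, contig;
		const char *granule;
	} cfg[] = {
		{  4L << 10, 64L << 10, "4K"  },	/* -> 16 PTEs  */
		{ 16L << 10,  2L << 20, "16K" },	/* -> 128 PTEs */
		{ 64L << 10,  2L << 20, "64K" },	/* -> 32 PTEs  */
	};
	int i;

	for (i = 0; i < 3; i++)
		printf("%s pages: CONT_PTES = %ld\n",
		       cfg[i].granule, cfg[i].contig / cfg[i].page);
	return 0;
}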
> 
> Also FWIW, I'm struggling to understand CHECK_CONTPTE_FLAG(). Perhaps something
> like this would suffice?

The idea of CHECK_CONTPTE_FLAG() is that, once one flag has been found, the 
remaining entries are checked only for the other flag; the diff below will 
still keep checking for both flags until both are found.
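
Open-coded (just a sketch of the intent behind the macro, not a tested
replacement for the patch), it amounts to something like:

/* Illustrative only; same helpers as arch/arm64/mm/contpte.c. */
pte_t contpte_ptep_get_sketch(pte_t *ptep, pte_t orig_pte)
{
	int i;

	ptep = contpte_align_down(ptep);

	for (i = 0; i < CONT_PTES; i++, ptep++) {
		pte_t pte = __ptep_get(ptep);

		if (pte_dirty(pte)) {
			orig_pte = pte_mkdirty(orig_pte);
			/* dirty settled: scan the rest only for young */
			for (; i < CONT_PTES; i++, ptep++) {
				if (pte_young(__ptep_get(ptep))) {
					orig_pte = pte_mkyoung(orig_pte);
					break;
				}
			}
			break;
		}

		if (pte_young(pte)) {
			orig_pte = pte_mkyoung(orig_pte);
			/* young settled: scan the rest only for dirty */
			for (; i < CONT_PTES; i++, ptep++) {
				if (pte_dirty(__ptep_get(ptep))) {
					orig_pte = pte_mkdirty(orig_pte);
					break;
				}
			}
			break;
		}
	}

	return orig_pte;
}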

> 
> ----8<----
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index 55107d27d3f8..7787b116b339 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -169,11 +169,17 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte)
>          for (i = 0; i < CONT_PTES; i++, ptep++) {
>                  pte = __ptep_get(ptep);
> 
> -               if (pte_dirty(pte))
> +               if (pte_dirty(pte)) {
>                          orig_pte = pte_mkdirty(orig_pte);
> +                       if (pte_young(orig_pte))
> +                               break;
> +               }
> 
> -               if (pte_young(pte))
> +               if (pte_young(pte)) {
>                          orig_pte = pte_mkyoung(orig_pte);
> +                       if (pte_dirty(orig_pte))
> +                               break;
> +               }
>          }
> 
>          return orig_pte;
> ----8<----
> 
> Thanks,
> Ryan
> 
> 
>>
>> Thanks,
>> Lance
>>
>>>
>>>>
>>>>>           }
>>>>>
>>>>>           return orig_pte;
>>>>> --
>>>>> 2.34.1
>>>>>
>>>>
>>>> Thanks
>>>> Barry
>>>>
>>>
> 

