Message-ID: <B8EBF52B-BC6A-4778-81AA-DDEFC9BF6157@nvidia.com>
Date:   Fri, 27 Mar 2020 21:17:00 -0400
From:   Zi Yan <ziy@...dia.com>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
CC:     <akpm@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH 5/7] khugepaged: Allow to collapse PTE-mapped compound
 pages

On 27 Mar 2020, at 20:39, Kirill A. Shutemov wrote:

> On Fri, Mar 27, 2020 at 02:55:06PM -0400, Zi Yan wrote:
>> On 27 Mar 2020, at 13:05, Kirill A. Shutemov wrote:
>>
>>> We can collapse PTE-mapped compound pages. We only need to avoid
>>> handling them more than once: lock/unlock the page only once if it's present
>>> in the PMD range multiple times, as it is handled at the compound level. The
>>> same goes for LRU isolation and putpack.
>>
>> s/putpack/putback/
>>
>>>
>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>>> ---
>>>  mm/khugepaged.c | 41 +++++++++++++++++++++++++++++++----------
>>>  1 file changed, 31 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index b47edfe57f7b..c8c2c463095c 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -515,6 +515,17 @@ void __khugepaged_exit(struct mm_struct *mm)
>>>
>>>  static void release_pte_page(struct page *page)
>>>  {
>>> +   /*
>>> +    * We need to unlock and put compound page on LRU only once.
>>> +    * The rest of the pages have to be locked and not on LRU here.
>>> +    */
>>> +   VM_BUG_ON_PAGE(!PageCompound(page) &&
>>> +                   (!PageLocked(page) && PageLRU(page)), page);
>> It only checks base pages.
>
> That's the point.
>
>> Add
>>
>> VM_BUG_ON_PAGE(PageCompound(page) && PageLocked(page) && PageLRU(page), page);
>>
>> to check for compound pages.
>
> The compound page may be locked here if the function is called for the first
> time for the page, and not locked after that (because we've unlocked it when
> we saw it the first time). The same goes for LRU.
>

The first time release_pte_page() is called for a compound page, the page is
locked and not on the LRU, so the VM_BUG_ON I proposed passes.
The second time and after, the compound page is unlocked and on the LRU,
so it still passes.

For a base page, it passes as well.

The remaining, unexpected situation (a compound page that is both locked and
on the LRU) triggers the VM_BUG_ON I proposed, but your VM_BUG_ON will not
detect it, right?
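
To spell the states out (a sketch only; the first check is the one in your
patch, the second is the one I proposed, so this is not what the patch
currently does):

	/*
	 * Sketch combining both checks (not the posted patch).
	 * A base page must not be both unlocked and on the LRU here.
	 * A compound page must never be locked and on the LRU at the
	 * same time: it is locked and off the LRU on the first visit,
	 * and unlocked and back on the LRU on later visits.
	 */
	VM_BUG_ON_PAGE(!PageCompound(page) &&
		       (!PageLocked(page) && PageLRU(page)), page);
	VM_BUG_ON_PAGE(PageCompound(page) &&
		       PageLocked(page) && PageLRU(page), page);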



>>> +
>>> +   if (!PageLocked(page))
>>> +           return;
>>> +
>>> +   page = compound_head(page);
>>>     dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
>>>     unlock_page(page);
>>>     putback_lru_page(page);
>>> @@ -537,6 +548,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>>     pte_t *_pte;
>>>     int none_or_zero = 0, result = 0, referenced = 0;
>>>     bool writable = false;
>>> +   LIST_HEAD(compound_pagelist);
>>>
>>>     for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
>>>          _pte++, address += PAGE_SIZE) {
>>> @@ -561,13 +573,23 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>>                     goto out;
>>>             }
>>>
>>> -           /* TODO: teach khugepaged to collapse THP mapped with pte */
>>> +           VM_BUG_ON_PAGE(!PageAnon(page), page);
>>> +
>>>             if (PageCompound(page)) {
>>> -                   result = SCAN_PAGE_COMPOUND;
>>> -                   goto out;
>>> -           }
>>> +                   struct page *p;
>>> +                   page = compound_head(page);
>>>
>>> -           VM_BUG_ON_PAGE(!PageAnon(page), page);
>>> +                   /*
>>> +                    * Check if we have dealt with the compount page
>>
>> s/compount/compound/
>>
>
> Thanks.
>
>>> +                    * already
>>> +                    */
>>> +                   list_for_each_entry(p, &compound_pagelist, lru) {
>>> +                           if (page == p)
>>> +                                   break;
>>> +                   }
>>> +                   if (page == p)
>>> +                           continue;
>>> +           }
>>>
>>>             /*
>>>              * We can do it before isolate_lru_page because the
>>> @@ -640,6 +662,9 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>>                 page_is_young(page) || PageReferenced(page) ||
>>>                 mmu_notifier_test_young(vma->vm_mm, address))
>>>                     referenced++;
>>> +
>>> +           if (PageCompound(page))
>>> +                   list_add_tail(&page->lru, &compound_pagelist);
>>>     }
>>>     if (likely(writable)) {
>>>             if (likely(referenced)) {
>>
>> Do we need a list here? We should see at most one compound page here, right?
>
> Um? It's outside the pte loop. We get here once per PMD range.
>
> 'page' argument to trace_mm_collapse_huge_page_isolate() is misleading:
> it's just the last page handled in the loop.
>

Throughout the pte loop, we should see at most one compound page, right?

If that is the case, we do not need a list to store that single compound page;
a struct page pointer, initialized to NULL and later assigned to the discovered
compound page, would do, right? I am not saying I want to change the code
in this way; I am just trying to make sure I understand the code correctly.
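
For instance (purely illustrative, not a proposal; "seen_compound" is a
made-up name and the "..." parts stand for the unchanged code in the patch):

	struct page *seen_compound = NULL;	/* instead of compound_pagelist */

	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
	     _pte++, address += PAGE_SIZE) {
		...
		if (PageCompound(page)) {
			page = compound_head(page);
			/* Skip if we have dealt with this compound page already. */
			if (page == seen_compound)
				continue;
		}
		...
		if (PageCompound(page))
			seen_compound = page;
	}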

Thanks.

—
Best Regards,
Yan Zi
