Message-ID: <db660bef-c927-b793-7a79-a88df197a756@linux.alibaba.com>
Date:   Wed, 18 Mar 2020 22:39:21 -0700
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     kirill.shutemov@...ux.intel.com, hughd@...gle.com,
        aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: khugepaged: fix potential page state corruption



On 3/18/20 5:55 PM, Yang Shi wrote:
>
>
> On 3/18/20 5:12 PM, Kirill A. Shutemov wrote:
>> On Thu, Mar 19, 2020 at 07:19:42AM +0800, Yang Shi wrote:
>>> When khugepaged collapses anonymous pages, the base pages would be freed
>>> via pagevec or free_page_and_swap_cache().  But, the anonymous pages may
>>> be added back to the LRU first, which might result in the below race:
>>>
>>>     CPU A                CPU B
>>> khugepaged:
>>>    unlock page
>>>    putback_lru_page
>>>      add to lru
>>>                 page reclaim:
>>>                   isolate this page
>>>                   try_to_unmap
>>>    page_remove_rmap <-- corrupt _mapcount
>>>
>>> It looks like nothing would prevent the pages from being isolated by the
>>> reclaimer.
>> Hm. Why should it?
>>
>> try_to_unmap() doesn't exclude parallel page unmapping. _mapcount is
>> protected by ptl. And this particular _mapcount pin is not reachable for
>> reclaim as it's not part of the usual page table tree. Basically,
>> try_to_unmap() will never succeed until we give up the _mapcount on the
>> khugepaged side.
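(For context: by the time __collapse_huge_page_copy() runs,
collapse_huge_page() has already flushed the pmd. A heavily simplified
sketch of the ordering in mm/khugepaged.c around this version, with most
details elided:)

	anon_vma_lock_write(vma->anon_vma);
	...
	/*
	 * Clear the pmd and flush the TLB.  After this point rmap walks,
	 * including try_to_unmap() from reclaim, can no longer find the
	 * ptes, so the _mapcount pins khugepaged holds stay out of reach
	 * until page_remove_rmap() runs in the copy loop below.
	 */
	_pmd = pmdp_collapse_flush(vma, address, pmd);
	...
	isolated = __collapse_huge_page_isolate(vma, address, pte);
	...
	__collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl);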
>
> I don't quite get it. What does "not part of the usual page table tree"
> mean?
>
> What about the case where try_to_unmap() acquires the ptl before
> khugepaged does?
>
>>
>> I don't see the issue right away.
>>
>>> The other problem is the page's active or unevictable flag might still
>>> be set when freeing the page via free_page_and_swap_cache().
>> So what?
>
> The flags may leak to the page free path, and then the kernel may
> complain if DEBUG_VM is set.
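For reference, with CONFIG_DEBUG_VM the free path checks every base page
against PAGE_FLAGS_CHECK_AT_FREE and reports bad_page() if any of these
bits is still set (include/linux/page-flags.h, quoted roughly from this
era, so worth double-checking):

	#define PAGE_FLAGS_CHECK_AT_FREE				\
		(1UL << PG_lru		| 1UL << PG_locked	|	\
		 1UL << PG_private	| 1UL << PG_private_2	|	\
		 1UL << PG_writeback	| 1UL << PG_reserved	|	\
		 1UL << PG_slab		| 1UL << PG_active	|	\
		 1UL << PG_unevictable	| __PG_MLOCKED)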
>
>>
>>> The putback_lru_page() would not clear those two flags if the pages are
>>> released via pagevec, it sounds nothing prevents from isolating active

Sorry, this is a typo. If the page is freed via pagevec, the active and
unevictable flags would get cleared by page_off_lru() before freeing; see
the excerpt below.
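
For reference, page_off_lru() (include/linux/mm_inline.h, roughly as of
this version) clears both flags as the page comes off the LRU:

	static __always_inline enum lru_list page_off_lru(struct page *page)
	{
		enum lru_list lru;

		if (PageUnevictable(page)) {
			/* leaving the unevictable list */
			__ClearPageUnevictable(page);
			lru = LRU_UNEVICTABLE;
		} else {
			lru = page_lru_base_type(page);
			if (PageActive(page)) {
				/* leaving an active list */
				__ClearPageActive(page);
				lru += LRU_ACTIVE;
			}
		}
		return lru;
	}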

But, if the page is freed by free_page_and_swap_cache(), these two flags
are not cleared. However, this path is rarely hit; the pages are freed
via pagevec in most cases.

>>> or unevictable pages.
>> Again, why should it? vmscan is equipped to deal with this.
>
> I don't mean vmscan, I mean khugepaged may isolate active and
> unevictable pages since it simply walks the page tables.
>
>>
>>> However, I didn't actually run into these problems; they are theoretical,
>>> found by visual inspection.
>>>
>>> And, it also seems unnecessary to add the pages back to the LRU again,
>>> since they are about to be freed when reaching this point.  So, clear
>>> the active and unevictable flags, unlock the page, and drop the refcount
>>> from the isolation, instead of calling putback_lru_page(), as the page
>>> cache collapse path does.
>> Hm? But we do call putback_lru_page() on the way out. I do not follow.
>
> It just calls putback_lru_page() on the error path, not the success path.
> Putting pages back to the LRU on the error path definitely makes sense.
> Here it is the success path.
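For comparison, the error path helper release_pte_page() (mm/khugepaged.c,
roughly as of this version) is:

	static void release_pte_page(struct page *page)
	{
		/* undo the isolation accounting and put the page back */
		dec_node_page_state(page,
				NR_ISOLATED_ANON + page_is_file_cache(page));
		unlock_page(page);
		putback_lru_page(page);
	}

and putback_lru_page() is what consumes the isolation reference there.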
>
>>
>>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>>> Cc: Hugh Dickins <hughd@...gle.com>
>>> Cc: Andrea Arcangeli <aarcange@...hat.com>
>>> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
>>> ---
>>>   mm/khugepaged.c | 10 +++++++++-
>>>   1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index b679908..f42fa4e 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -673,7 +673,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>>               src_page = pte_page(pteval);
>>>               copy_user_highpage(page, src_page, address, vma);
>>>               VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
>>> -            release_pte_page(src_page);
>>>               /*
>>>                * ptl mostly unnecessary, but preempt has to
>>>                * be disabled to update the per-cpu stats
>>> @@ -687,6 +686,15 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>>               pte_clear(vma->vm_mm, address, _pte);
>>>               page_remove_rmap(src_page, false);
>>>               spin_unlock(ptl);
>>> +
>>> +            dec_node_page_state(src_page,
>>> +                NR_ISOLATED_ANON + page_is_file_cache(src_page));
>>> +            ClearPageActive(src_page);
>>> +            ClearPageUnevictable(src_page);
>>> +            unlock_page(src_page);
>>> +            /* Drop refcount from isolate */
>>> +            put_page(src_page);
>>> +
>>>               free_page_and_swap_cache(src_page);
>>>           }
>>>       }
>>> -- 
>>> 1.8.3.1
>>>
>>>
>
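One more note on the hunk above, spelling out the reference accounting as
I understand it (worth double-checking):

	/*
	 * Refcounts on src_page in the new success path:
	 *   - enters the loop with one ref from the pte mapping plus
	 *     one ref taken by __collapse_huge_page_isolate()
	 *   - put_page() drops the isolation ref
	 *   - free_page_and_swap_cache() drops the pte-mapping ref
	 * so the base page is really freed here, just as with the old
	 * release_pte_page() + free_page_and_swap_cache() pair.
	 */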
