Message-ID: <b7891211-a08a-453f-83c4-76889d6ec9fe@linux.alibaba.com>
Date: Wed, 18 Mar 2020 16:47:42 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: kirill.shutemov@...ux.intel.com, hughd@...gle.com,
aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: khugepaged: fix potential page state corruption
On 3/18/20 4:40 PM, Kirill A. Shutemov wrote:
> On Thu, Mar 19, 2020 at 07:19:42AM +0800, Yang Shi wrote:
>> When khugepaged collapses anonymous pages, the base pages are freed
>> via pagevec or free_page_and_swap_cache(). But the anonymous page may
>> be added back to the LRU first, which can result in the race below:
>>
>> CPU A                               CPU B
>> khugepaged:
>>   unlock page
>>   putback_lru_page
>>     add to lru
>>                                     page reclaim:
>>                                       isolate this page
>>                                         try_to_unmap
>>                                           page_remove_rmap <-- corrupt _mapcount
>>
>> It looks like nothing would prevent the pages from being isolated by
>> the reclaimer.
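>>
>> For reference, the helper being removed below looks roughly like this
>> in mm/khugepaged.c at the time (sketched from memory, not an exact
>> quote of the tree); its putback_lru_page() call is what re-adds the
>> about-to-be-freed page to the LRU:
>>
>>   static void release_pte_page(struct page *page)
>>   {
>>   	/* balance the inc done when the page was isolated */
>>   	dec_node_page_state(page,
>>   			NR_ISOLATED_ANON + page_is_file_cache(page));
>>   	unlock_page(page);
>>   	putback_lru_page(page);
>>   }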
>>
>> The other problem is that the page's active or unevictable flag might
>> still be set when the page is freed via free_page_and_swap_cache().
>> putback_lru_page() would not clear those two flags when the pages are
>> released via pagevec, and it seems nothing prevents active or
>> unevictable pages from being isolated either.
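>>
>> putback_lru_page() itself is only a thin wrapper at this point
>> (roughly, again from memory), so neither flag is cleared on the way
>> back to the LRU:
>>
>>   /* mm/vmscan.c */
>>   void putback_lru_page(struct page *page)
>>   {
>>   	lru_cache_add(page);
>>   	put_page(page);		/* drop ref acquired at isolation */
>>   }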
>>
>> However, I didn't actually run into these problems; they were found
>> only in theory, by visual inspection.
>>
>> It also seems unnecessary to add the pages back to the LRU again,
>> since they are about to be freed by the time we reach this point. So,
>> clear the active and unevictable flags, unlock the page, and drop the
>> refcount from isolation instead of calling putback_lru_page(), as the
>> page cache collapse path does (see the sketch below).
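>>
>> For comparison, the page cache collapse path (collapse_file()) already
>> tears its old pages down without putting them back on the LRU; the
>> loop there looks roughly like this (sketched from memory):
>>
>>   list_for_each_entry_safe(page, tmp, &pagelist, lru) {
>>   	list_del(&page->lru);
>>   	page->mapping = NULL;
>>   	page_ref_unfreeze(page, 1);
>>   	ClearPageActive(page);
>>   	ClearPageUnevictable(page);
>>   	unlock_page(page);
>>   	put_page(page);
>>   }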
>>
>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>> Cc: Hugh Dickins <hughd@...gle.com>
>> Cc: Andrea Arcangeli <aarcange@...hat.com>
>> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
>> ---
>> mm/khugepaged.c | 10 +++++++++-
>> 1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index b679908..f42fa4e 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -673,7 +673,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>  			src_page = pte_page(pteval);
>>  			copy_user_highpage(page, src_page, address, vma);
>>  			VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
>> -			release_pte_page(src_page);
>>  			/*
>>  			 * ptl mostly unnecessary, but preempt has to
>>  			 * be disabled to update the per-cpu stats
>> @@ -687,6 +686,15 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>  			pte_clear(vma->vm_mm, address, _pte);
>>  			page_remove_rmap(src_page, false);
>>  			spin_unlock(ptl);
>> +
>> +			dec_node_page_state(src_page,
>> +				NR_ISOLATED_ANON + page_is_file_cache(src_page));
>> +			ClearPageActive(src_page);
>> +			ClearPageUnevictable(src_page);
>> +			unlock_page(src_page);
>> +			/* Drop refcount from isolate */
>> +			put_page(src_page);
>> +
>>  			free_page_and_swap_cache(src_page);
>>  		}
>>  	}
> I will look at this more closely tomorrow, but it looks like an easier
> fix is to move release_pte_page() under the ptl that we take just
> after it.
Thanks, Kirill. However, it looks like putting the page back on the LRU
is not necessary. And doing this under the ptl also sounds not very
efficient IMHO, since it may spin waiting for the lru lock.
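
To spell out my reading of the suggestion (a rough sketch of the idea,
not a tested patch):

	spin_lock(ptl);
	pte_clear(vma->vm_mm, address, _pte);
	page_remove_rmap(src_page, false);
	/* put the page back before reclaim can see the cleared PTE */
	release_pte_page(src_page);
	spin_unlock(ptl);

release_pte_page() -> putback_lru_page() may end up taking the lru lock
(when the pagevec drains), and we would then be spinning on it with the
page table lock held.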