Message-ID: <1d7c1fdd-3589-da46-716f-7767eecb87a4@linux.alibaba.com>
Date: Wed, 29 Apr 2020 17:47:34 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: kirill.shutemov@...ux.intel.com, hughd@...gle.com,
aarcange@...hat.com, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [linux-next PATCH 2/2] mm: khugepaged: don't have to put being
freed page back to lru
On 4/29/20 5:41 PM, Yang Shi wrote:
>
>
> On 4/29/20 3:56 PM, Yang Shi wrote:
>> When khugepaged has successfully isolated a base page and copied its data
>> into the collapsed THP, the base page is about to be freed. Putting the
>> page back on the LRU is not productive: vmscan might isolate it again,
>> but it can never reclaim it, because the page cannot be unmapped by
>> try_to_unmap() at all.
>>
>> Actually, khugepaged is the last user of this page, so it can be freed
>> directly: clear the active and unevictable flags, unlock the page, and
>> drop the refcount taken at isolation, instead of calling
>> putback_lru_page().
>
> Please disregard this patch. I just remembered that Kirill added support
> for collapsing shared pages. If a page is shared, it has to be put back
> on the LRU since it may still be mapped by other processes, so we need to
> check the mapcount before skipping the LRU.
>
> And I spotted another issue. release_pte_page() calls
> mod_node_page_state() unconditionally, which was fine before. But with
> the support for collapsing shared pages, we need to check whether the
> last mapcount is gone.
Hmm... that second point is false. I mixed up NR_ISOLATED_ANON with
NR_ANON_MAPPED: the page was isolated either way, so the NR_ISOLATED_ANON
counter has to be adjusted regardless of the mapcount.
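
For the shared-page point above, the rectified helper would need to look
something like the below. This is an untested sketch only, not the
follow-up patch itself; the page_mapcount() test and folding both paths
into one helper are just to illustrate the idea.

static void release_pte_page(struct page *page)
{
	/* The page really was isolated, so this stays unconditional. */
	mod_node_page_state(page_pgdat(page),
			NR_ISOLATED_ANON + page_is_file_lru(page),
			-compound_nr(page));

	/* (A compound page would want total_mapcount() instead.) */
	if (page_mapcount(page) > 1) {
		/* Still mapped by other processes: put it back on the LRU. */
		unlock_page(page);
		putback_lru_page(page);
		return;
	}

	/* We are the last user: free directly instead of via the LRU. */
	ClearPageActive(page);
	ClearPageUnevictable(page);
	unlock_page(page);
	/* Drop the refcount taken at isolation. */
	put_page(page);
}

Whether that distinction is worth it, versus simply keeping
putback_lru_page() for shared pages, is what the rectified patches need to
sort out.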
>
> Andrew, would you please remove this patch from the -mm tree? I will
> send one or two rectified patches. Sorry for the inconvenience.
>
>>
>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>> Cc: Hugh Dickins <hughd@...gle.com>
>> Cc: Andrea Arcangeli <aarcange@...hat.com>
>> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
>> ---
>> mm/khugepaged.c | 15 +++++++++++++--
>> 1 file changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 0c8d30b..c131a90 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -559,6 +559,17 @@ void __khugepaged_exit(struct mm_struct *mm)
>>  static void release_pte_page(struct page *page)
>>  {
>>  	mod_node_page_state(page_pgdat(page),
>> +			NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
>> +	ClearPageActive(page);
>> +	ClearPageUnevictable(page);
>> +	unlock_page(page);
>> +	/* Drop refcount from isolate */
>> +	put_page(page);
>> +}
>> +
>> +static void release_pte_page_to_lru(struct page *page)
>> +{
>> +	mod_node_page_state(page_pgdat(page),
>>  			NR_ISOLATED_ANON + page_is_file_lru(page),
>>  			-compound_nr(page));
>>  	unlock_page(page);
>> @@ -576,12 +587,12 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
>>  		page = pte_page(pteval);
>>  		if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval)) &&
>>  				!PageCompound(page))
>> -			release_pte_page(page);
>> +			release_pte_page_to_lru(page);
>>  	}
>>
>>  	list_for_each_entry_safe(page, tmp, compound_pagelist, lru) {
>>  		list_del(&page->lru);
>> -		release_pte_page(page);
>> +		release_pte_page_to_lru(page);
>>  	}
>>  }
>>
>