Message-ID: <ef98debf-d82d-4ca8-bdb6-352c459bb7cf@linux.alibaba.com>
Date: Fri, 1 May 2020 10:00:24 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: kirill.shutemov@...ux.intel.com, hughd@...gle.com,
aarcange@...hat.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [v2 linux-next PATCH 2/2] mm: khugepaged: don't have to put being
freed page back to lru
On 5/1/20 12:04 AM, Kirill A. Shutemov wrote:
> On Fri, May 01, 2020 at 04:41:19AM +0800, Yang Shi wrote:
>> When khugepaged has successfully isolated and copied data from the old
>> page to the collapsed THP, the old page is about to be freed once its
>> last mapcount is gone. Putting the page back on the lru is not very
>> productive in this case: the page might be isolated by vmscan again,
>> but vmscan cannot reclaim it since try_to_unmap() cannot unmap it at all.
>>
>> Actually, if khugepaged is the last user of this page, it can be freed
>> directly. So clear the active and unevictable flags, unlock the page
>> and drop the refcount taken by the isolation instead of calling
>> putback_lru_page().
> Any reason putback_lru_page() cannot do it internally? I mean if it is
> page_count() == 1, free the page.
Because it seems putback_lru_page() is only called when the page is
*not* about to be freed, for example when compaction is aborted, when
migration failed on some pages, etc.
I checked a couple of places, e.g. compaction, migration and vmscan.
All of them free the page (i.e. once the old page has been migrated to
the new page successfully) via put_page() without putting it back on
the lru, roughly following the pattern sketched below. So it seems
freeing the page in putback_lru_page() may never happen today.
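
Simplified sketch just to illustrate that pattern (not the actual
migration code, and the helper name is made up):

static void old_page_done(struct page *old_page, bool migrated)
{
	if (migrated)
		/* drop the isolation ref directly, may free the page */
		put_page(old_page);
	else
		/* only a failed page goes back on the lru */
		putback_lru_page(old_page);
}
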
We could add this, but then khugepaged would be the only path that may
free a page in putback_lru_page(). So it seems less confusing to follow
the other users, IMHO.
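
Just to make sure I got your point, the suggestion would look roughly
like the below, right? (rough, untested sketch; the non-freeing path is
what putback_lru_page() does today as far as I can tell)

void putback_lru_page(struct page *page)
{
	if (page_count(page) == 1) {
		/* the isolation ref is the only reference left, free it */
		ClearPageActive(page);
		ClearPageUnevictable(page);
		put_page(page);
		return;
	}
	lru_cache_add(page);
	put_page(page);		/* drop ref from isolate */
}
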
>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>> Cc: Hugh Dickins <hughd@...gle.com>
>> Cc: Andrea Arcangeli <aarcange@...hat.com>
>> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
>> ---
>> v2: Check the mapcount and skip putback_lru_page() if the last mapcount is gone
>>
>> mm/khugepaged.c | 20 ++++++++++++++------
>> 1 file changed, 14 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 0c8d30b..1fdd677 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -559,10 +559,18 @@ void __khugepaged_exit(struct mm_struct *mm)
>>  static void release_pte_page(struct page *page)
>>  {
>>  	mod_node_page_state(page_pgdat(page),
>> -			NR_ISOLATED_ANON + page_is_file_lru(page),
>> -			-compound_nr(page));
>> -	unlock_page(page);
>> -	putback_lru_page(page);
>> +			NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
>> +
>> +	if (total_mapcount(page)) {
>> +		unlock_page(page);
>> +		putback_lru_page(page);
>> +	} else {
>> +		ClearPageActive(page);
>> +		ClearPageUnevictable(page);
>> +		unlock_page(page);
>> +		/* Drop refcount from isolate */
>> +		put_page(page);
>> +	}
>>  }
>>
>>  static void release_pte_pages(pte_t *pte, pte_t *_pte,
>> @@ -771,8 +779,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>  		} else {
>>  			src_page = pte_page(pteval);
>>  			copy_user_highpage(page, src_page, address, vma);
>> -			if (!PageCompound(src_page))
>> -				release_pte_page(src_page);
>>  			/*
>>  			 * ptl mostly unnecessary, but preempt has to
>>  			 * be disabled to update the per-cpu stats
>> @@ -786,6 +792,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>  			pte_clear(vma->vm_mm, address, _pte);
>>  			page_remove_rmap(src_page, false);
>>  			spin_unlock(ptl);
>> +			if (!PageCompound(src_page))
>> +				release_pte_page(src_page);
>>  			free_page_and_swap_cache(src_page);
>>  		}
>>  	}
>> --
>> 1.8.3.1
>>
>>