Date:   Fri,  1 May 2020 04:41:19 +0800
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     kirill.shutemov@...ux.intel.com, hughd@...gle.com,
        aarcange@...hat.com, akpm@...ux-foundation.org
Cc:     yang.shi@...ux.alibaba.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: [v2 linux-next PATCH 2/2] mm: khugepaged: no need to put page being freed back to lru

When khugepaged has successfully isolated and copied the data from an
old page into the collapsed THP, the old page is about to be freed once
its last mapcount is gone.  Putting the page back on the lru is not
productive in this case: vmscan may isolate the page again, but it can
never reclaim it, since the page can't be unmapped by try_to_unmap() at
all.

If khugepaged is the last user of the page, the page can be freed
directly.  So, when the last mapcount is gone, clear the active and
unevictable flags, unlock the page, and drop the refcount taken at
isolation, instead of calling putback_lru_page().  Since
release_pte_page() now checks the mapcount, it is called after
page_remove_rmap() has dropped the last mapcount.

Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
---
v2: Check the mapcount and skip the lru putback if the last mapcount is gone
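
For reference, a condensed sketch of the old page's refcount and
mapcount lifecycle across __collapse_huge_page_isolate() and
__collapse_huge_page_copy() with this patch applied (illustrative only,
not part of the diff; locking elided, names as in the hunks below, and
assuming khugepaged holds the last references):

	isolate_lru_page(src_page);         /* takes a refcount, off the lru */
	copy_user_highpage(page, src_page, address, vma);
	page_remove_rmap(src_page, false);  /* last mapcount drops to 0 */
	release_pte_page(src_page);         /* total_mapcount() == 0, so
	                                     * put_page() drops the isolate
	                                     * refcount instead of calling
	                                     * putback_lru_page() */
	free_page_and_swap_cache(src_page); /* drops the remaining refcount,
	                                     * the page is freed */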

 mm/khugepaged.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)
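
Note the ordering in the second hunk: release_pte_page() now tests
total_mapcount(), so it must run after page_remove_rmap() has dropped
the last mapcount, and it stays outside the page table lock.  Condensed
from the post-patch code (illustrative, not part of the diff):

	spin_lock(ptl);
	pte_clear(vma->vm_mm, address, _pte);
	page_remove_rmap(src_page, false);  /* last mapcount drops here */
	spin_unlock(ptl);
	if (!PageCompound(src_page))
		release_pte_page(src_page); /* mapcount now 0: drops the
		                             * isolate refcount */
	free_page_and_swap_cache(src_page);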

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0c8d30b..1fdd677 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -559,10 +559,18 @@ void __khugepaged_exit(struct mm_struct *mm)
 static void release_pte_page(struct page *page)
 {
 	mod_node_page_state(page_pgdat(page),
-			NR_ISOLATED_ANON + page_is_file_lru(page),
-			-compound_nr(page));
-	unlock_page(page);
-	putback_lru_page(page);
+		NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
+
+	if (total_mapcount(page)) {
+		unlock_page(page);
+		putback_lru_page(page);
+	} else {
+		ClearPageActive(page);
+		ClearPageUnevictable(page);
+		unlock_page(page);
+		/* Drop refcount from isolate */
+		put_page(page);
+	}
 }
 
 static void release_pte_pages(pte_t *pte, pte_t *_pte,
@@ -771,8 +779,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 		} else {
 			src_page = pte_page(pteval);
 			copy_user_highpage(page, src_page, address, vma);
-			if (!PageCompound(src_page))
-				release_pte_page(src_page);
 			/*
 			 * ptl mostly unnecessary, but preempt has to
 			 * be disabled to update the per-cpu stats
@@ -786,6 +792,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 			pte_clear(vma->vm_mm, address, _pte);
 			page_remove_rmap(src_page, false);
 			spin_unlock(ptl);
+			if (!PageCompound(src_page))
+				release_pte_page(src_page);
 			free_page_and_swap_cache(src_page);
 		}
 	}
-- 
1.8.3.1
