Date:   Fri,  2 Sep 2016 15:44:36 +0300
From:   Ebru Akagunduz <ebru.akagunduz@...il.com>
To:     linux-mm@...ck.org
Cc:     riel@...hat.com, aarcange@...hat.com, akpm@...ux-foundation.org,
        vbabka@...e.cz, mgorman@...hsingularity.net,
        kirill.shutemov@...ux.intel.com, hannes@...xchg.org,
        linux-kernel@...r.kernel.org,
        Ebru Akagunduz <ebru.akagunduz@...il.com>
Subject: [PATCH] mm, thp: fix leaking mapped pte in __collapse_huge_page_swapin()

Currently, khugepaged does not allow swapin if there are not
enough young pages in a THP. The problem is that the check is
done inside the swapin loop, after the pte has already been
mapped with pte_offset_map(); so when a THP does not have
enough young pages, khugepaged returns without unmapping the
pte and leaks it.

This patch prevents the leak by doing the check before the pte
is mapped.
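
For illustration only (not part of the patch), below is a minimal
user-space sketch in C of the same control-flow pattern: a resource
is acquired and an early return inside the loop skips the matching
release, which is the shape of returning from the swapin loop after
pte_offset_map() without a pte_unmap(). The names and the
malloc()/free() stand-ins are hypothetical analogies, not kernel
code.

#include <stdbool.h>
#include <stdlib.h>

#define NR_PTES    512
#define THRESHOLD  (NR_PTES / 2)

/* Buggy shape: the threshold check runs after the mapping is taken. */
static bool swapin_buggy(int referenced)
{
	int *map = malloc(NR_PTES * sizeof(*map)); /* stand-in for pte_offset_map() */

	if (!map)
		return false;

	for (int i = 0; i < NR_PTES; i++) {
		if (referenced < THRESHOLD)
			return false;              /* early return leaks 'map' */
		map[i] = 0;
	}

	free(map);                                 /* stand-in for pte_unmap() */
	return true;
}

/* Fixed shape: check first, take the mapping only when proceeding. */
static bool swapin_fixed(int referenced)
{
	int *map;

	if (referenced < THRESHOLD)
		return false;                      /* nothing mapped yet, nothing leaked */

	map = malloc(NR_PTES * sizeof(*map));
	if (!map)
		return false;

	for (int i = 0; i < NR_PTES; i++)
		map[i] = 0;

	free(map);
	return true;
}

int main(void)
{
	/*
	 * With too few referenced ptes both variants refuse the swapin,
	 * but only the fixed one releases everything it acquired.
	 */
	return (swapin_buggy(1) || swapin_fixed(1)) ? 1 : 0;
}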

Signed-off-by: Ebru Akagunduz <ebru.akagunduz@...il.com>
Suggested-by: Andrea Arcangeli <aarcange@...hat.com>
---
 mm/khugepaged.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 79c52d0..f401e9d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -881,6 +881,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		.pmd = pmd,
 	};
 
+	/* we only decide to swapin, if there is enough young ptes */
+	if (referenced < HPAGE_PMD_NR/2) {
+		trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
+		return false;
+	}
 	fe.pte = pte_offset_map(pmd, address);
 	for (; fe.address < address + HPAGE_PMD_NR*PAGE_SIZE;
 			fe.pte++, fe.address += PAGE_SIZE) {
@@ -888,11 +893,6 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		if (!is_swap_pte(pteval))
 			continue;
 		swapped_in++;
-		/* we only decide to swapin, if there is enough young ptes */
-		if (referenced < HPAGE_PMD_NR/2) {
-			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-			return false;
-		}
 		ret = do_swap_page(&fe, pteval);
 
 		/* do_swap_page returns VM_FAULT_RETRY with released mmap_sem */
-- 
1.9.1
