Message-ID: <20210323135405.65059-6-linmiaohe@huawei.com>
Date:   Tue, 23 Mar 2021 09:54:05 -0400
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     <akpm@...ux-foundation.org>
CC:     <jglisse@...hat.com>, <shy828301@...il.com>, <aquini@...hat.com>,
        <david@...hat.com>, <apopple@...dia.com>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <linmiaohe@...wei.com>
Subject: [PATCH v2 5/5] mm/migrate.c: fix potential deadlock in NUMA balancing shared exec THP case

Since commit c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA
balancing"), NUMA balancing skips shared exec transhuge pages. But this
enhancement is not suitable for the transhuge page path, because
page_mapcount() is already required to be 1 there as no migration pte
dance is done for THP. Worse, a shared exec transhuge page now leaves
migrate_misplaced_transhuge_page() with the page table entry untouched
and the page still locked. The NUMA page fault is then triggered again,
and a deadlock occurs when we start waiting for the page lock we hold
ourselves.
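
To make the locking problem concrete, below is a minimal userspace C
model of the control flow; every name in it is an illustrative stand-in
for the kernel code, not a real API, and the page lock is reduced to a
flag:

/*
 * Minimal userspace model of the deadlocking flow before this patch.
 * All names are illustrative stand-ins, not kernel APIs; the page lock
 * is modelled as a simple flag and "waiting" as a printout.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_page { bool locked; };

/* Stand-in for is_shared_exec_page(): assume a shared exec THP. */
static bool toy_is_shared_exec_page(void) { return true; }

/* Models migrate_misplaced_transhuge_page() before this patch: the
 * callee is expected to unlock the page, but the skip path does not. */
static int toy_migrate_misplaced_thp(struct toy_page *page)
{
	if (toy_is_shared_exec_page())
		goto out;
	/* Real migration (or its failure paths) would unlock the page
	 * and update the pmd; model that as a plain unlock. */
	page->locked = false;
	return 1;
out:
	/* Bug modelled here: return with the page still locked and the
	 * pmd untouched, so the same NUMA fault will be taken again. */
	return 0;
}

/* Models do_huge_pmd_numa_page(): takes the page lock, then hands the
 * page to the migration helper. */
static void toy_numa_fault(struct toy_page *page)
{
	if (page->locked) {
		printf("refault waits for a lock we already hold -> deadlock\n");
		return;
	}
	page->locked = true;
	toy_migrate_misplaced_thp(page);
}

int main(void)
{
	struct toy_page thp = { .locked = false };

	toy_numa_fault(&thp);	/* first fault: skip path leaves the page locked */
	toy_numa_fault(&thp);	/* retried fault: blocks on its own lock */
	return 0;
}

With the check removed, the helper always either migrates the page or
unlocks it before returning, so the retried fault no longer waits on a
lock we still hold.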

Fixes: c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA balancing")
Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
---
 mm/migrate.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5357a8527ca2..68bfa1625898 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2192,9 +2192,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	int page_lru = page_is_file_lru(page);
 	unsigned long start = address & HPAGE_PMD_MASK;
 
-	if (is_shared_exec_page(vma, page))
-		goto out;
-
 	new_page = alloc_pages_node(node,
 		(GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
 		HPAGE_PMD_ORDER);
@@ -2306,7 +2303,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 
 out_unlock:
 	unlock_page(page);
-out:
 	put_page(page);
 	return 0;
 }
-- 
2.19.1
