Date:   Wed, 26 Jun 2019 21:10:00 +0800
From:   Pingfan Liu <kernelfans@...il.com>
To:     Linux-mm@...ck.org
Cc:     Pingfan Liu <kernelfans@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ira Weiny <ira.weiny@...el.com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        John Hubbard <jhubbard@...dia.com>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
        Christoph Hellwig <hch@....de>,
        Keith Busch <keith.busch@...el.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Linux-kernel@...r.kernel.org
Subject: [PATCHv4] mm/gup: speed up check_and_migrate_cma_pages() on huge page

Both hugetlb and thp pages lie in a pageblock of a single migration type, since
they are allocated from a free_list[]. Based on this fact, it is enough to
check a single subpage to decide the migration type of the whole huge page.
This saves (2M/4K - 1) loop iterations for a pmd_huge page on x86, and a
similar amount on other archs.
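
For illustration, here is a minimal restatement of the skip arithmetic used
in the diff below (not part of the patch itself):

	struct page *head = compound_head(pages[i]);
	long step = 1;

	if (PageCompound(head))
		/* number of subpages left in this huge page, from pages[i] */
		step = (1 << compound_order(head)) - (pages[i] - head);

	/*
	 * e.g. a 2MB THP on x86 is order 9: starting at the head page,
	 * step = 1 << 9 = 512, so 511 per-subpage iterations are skipped.
	 */
	i += step;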

Furthermore, the huge page only goes through isolate_huge_page() once, which
avoids taking the global hugetlb_lock many times and avoids meaningless
remove/add operations on the local list cma_page_list.

Signed-off-by: Pingfan Liu <kernelfans@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Ira Weiny <ira.weiny@...el.com>
Cc: Mike Rapoport <rppt@...ux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: John Hubbard <jhubbard@...dia.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
Cc: Christoph Hellwig <hch@....de>
Cc: Keith Busch <keith.busch@...el.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Linux-kernel@...r.kernel.org
---
v3 -> v4: fix C language precedence issue
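          (Presumably the v3 expression lacked parentheses around the shift;
          in C, binary '-' binds tighter than '<<', so

	step = 1 << compound_order(head) - (pages[i] - head);

          parses as 1 << (compound_order(head) - (pages[i] - head)). v4
          parenthesizes the shift, as shown in the hunk below.)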

 mm/gup.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ddde097..ffca55b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1342,19 +1342,22 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	LIST_HEAD(cma_page_list);
 
 check_again:
-	for (i = 0; i < nr_pages; i++) {
+	for (i = 0; i < nr_pages;) {
+
+		struct page *head = compound_head(pages[i]);
+		long step = 1;
+
+		if (PageCompound(head))
+			step = (1 << compound_order(head)) - (pages[i] - head);
 		/*
 		 * If we get a page from the CMA zone, since we are going to
 		 * be pinning these entries, we might as well move them out
 		 * of the CMA zone if possible.
 		 */
-		if (is_migrate_cma_page(pages[i])) {
-
-			struct page *head = compound_head(pages[i]);
-
-			if (PageHuge(head)) {
+		if (is_migrate_cma_page(head)) {
+			if (PageHuge(head))
 				isolate_huge_page(head, &cma_page_list);
-			} else {
+			else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
 					drain_allow = false;
@@ -1369,6 +1372,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 				}
 			}
 		}
+
+		i += step;
 	}
 
 	if (!list_empty(&cma_page_list)) {
-- 
2.7.5
