Date:   Wed,  5 Jan 2022 16:47:51 -0500
From:   Zi Yan <zi.yan@...t.com>
To:     David Hildenbrand <david@...hat.com>, linux-mm@...ck.org
Cc:     linux-kernel@...r.kernel.org,
        Michael Ellerman <mpe@...erman.id.au>,
        Christoph Hellwig <hch@....de>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Robin Murphy <robin.murphy@....com>,
        linuxppc-dev@...ts.ozlabs.org,
        virtualization@...ts.linux-foundation.org,
        iommu@...ts.linux-foundation.org, Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Eric Ren <renzhengeek@...il.com>, Zi Yan <ziy@...dia.com>
Subject: [RFC PATCH v3 3/8] mm: migrate: allocate the right size for non-hugetlb, non-THP compound pages

From: Zi Yan <ziy@...dia.com>

alloc_migration_target() is used by alloc_contig_range(), and non-LRU
movable compound pages can be migrated there. The current code does not
allocate the right page size for such pages. Check for THP precisely
using is_transparent_hugepage() and add allocation support for non-LRU
movable compound pages.

Signed-off-by: Zi Yan <ziy@...dia.com>
---
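As a quick illustration for readers outside mm/, below is a minimal
standalone sketch of the order-selection logic described above. It is
plain userspace C, not kernel code: struct page_stub and its fields are
hypothetical stand-ins for the kernel predicates named in the comments,
and the hugetlb case is omitted since it returns early through
alloc_huge_page_nodemask().

#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER 9	/* 2MB huge page with 4KB base pages (x86-64) */

struct page_stub {
	bool is_thp;		/* is_transparent_hugepage(page) */
	bool is_compound;	/* PageCompound(page) */
	unsigned int order;	/* compound_order(page) */
};

/* Pick the allocation order for the migration target page. */
static unsigned int migration_target_order(const struct page_stub *page)
{
	if (page->is_thp)
		return HPAGE_PMD_ORDER;	/* THP is always PMD-sized */
	else if (page->is_compound)
		return page->order;	/* new: match the source page's order */
	return 0;			/* ordinary base page */
}

int main(void)
{
	/* e.g. an order-4 non-LRU movable compound page */
	struct page_stub src = { .is_compound = true, .order = 4 };

	/*
	 * Before this patch a compound source page took the THP branch
	 * (PageTransHuge() only checks PageHead()) and the target was
	 * allocated at HPAGE_PMD_ORDER regardless of its actual order.
	 */
	printf("target order: %u\n", migration_target_order(&src));
	return 0;
}

The same imprecision explains why the final prep_transhuge_page() check
in the diff now tests the source page rather than new_page: with
__GFP_COMP allocations, the newly allocated non-LRU compound targets
would otherwise pass PageTransHuge() and be mistaken for THP.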
 mm/migrate.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..b1851ffb8576 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1546,9 +1546,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 
 		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
 		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
-	}
-
-	if (PageTransHuge(page)) {
+	} else if (is_transparent_hugepage(page)) {
 		/*
 		 * clear __GFP_RECLAIM to make the migration callback
 		 * consistent with regular THP allocations.
@@ -1556,14 +1554,19 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		gfp_mask &= ~__GFP_RECLAIM;
 		gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
+	} else if (PageCompound(page)) {
+		/* for non-LRU movable compound pages */
+		gfp_mask |= __GFP_COMP;
+		order = compound_order(page);
 	}
+
 	zidx = zone_idx(page_zone(page));
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
 
 	new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
 
-	if (new_page && PageTransHuge(new_page))
+	if (new_page && is_transparent_hugepage(page))
 		prep_transhuge_page(new_page);
 
 	return new_page;
-- 
2.34.1
