Message-Id: <6f26ce22d2fcd523418a085f2c588fe0776d46e7.1706794035.git.baolin.wang@linux.alibaba.com>
Date: Thu,  1 Feb 2024 21:31:13 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: akpm@...ux-foundation.org,
	muchun.song@...ux.dev
Cc: osalvador@...e.de,
	david@...hat.com,
	mhocko@...nel.org,
	baolin.wang@...ux.alibaba.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [RFC PATCH] mm: hugetlb: remove __GFP_THISNODE flag when dissolving the old hugetlb

Since commit 369fa227c219 ("mm: make alloc_contig_range handle free
hugetlb pages"), alloc_contig_range() can handle free hugetlb pages by
allocating a new fresh hugepage and replacing the old one in the free
hugepage pool.

However, our customers can still see alloc_contig_range() fail when it
encounters a free hugetlb page. The reason is that there is little free
memory on the old hugetlb page's node, so isolate_or_dissolve_huge_page()
cannot allocate a fresh hugetlb page on that node, because it sets the
__GFP_THISNODE flag. This makes sense to some degree.

Later, commit ae37c7ff79f1 ("mm: make alloc_contig_range handle
in-use hugetlb pages") handled in-use hugetlb pages by isolating them and
migrating them in __alloc_contig_migrate_range(), but that path does allow
falling back to another NUMA node when allocating the new hugetlb page in
alloc_migration_target().
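
For illustration, the hugetlb branch of alloc_migration_target() looks
roughly like the sketch below (simplified; the exact code differs between
kernel versions). The point is that no __GFP_THISNODE is added there, so
the target folio may be allocated on another node:

	if (folio_test_hugetlb(src)) {
		struct hstate *h = folio_hstate(src);

		/* nid is only the preferred node, mtc->nmask allows fallback */
		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
		return alloc_hugetlb_folio_nodemask(h, nid, mtc->nmask,
						    gfp_mask);
	}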

This introduces an inconsistency between the handling of free and in-use
hugetlb pages. Considering that CMA allocation and memory hotplug, which
rely on alloc_contig_range(), are important in some scenarios, and to keep
hugetlb handling consistent, we should remove the __GFP_THISNODE flag in
isolate_or_dissolve_huge_page() to allow falling back to other NUMA nodes,
which solves the alloc_contig_range() failure in our case. Since the
replacement page may now come from a different node, also account the new
hugetlb page against the node it was actually allocated on
(folio_nid(new_folio)) instead of the old page's node.
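
With this change, the replacement allocation in
alloc_and_dissolve_hugetlb_folio() for the non-gigantic case roughly
becomes the following (a simplified sketch; alloc_buddy_hugetlb_folio()
and its arguments are from the existing code path):

	gfp_t gfp_mask = htlb_alloc_mask(h);	/* no __GFP_THISNODE */
	int nid = folio_nid(old_folio);

	/*
	 * nid is now only the preferred node; with a NULL nodemask the
	 * allocation may fall back to other nodes under memory pressure.
	 */
	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);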

Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9d996fe4ecd9..9c832709728e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3029,7 +3029,7 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 			struct folio *old_folio, struct list_head *list)
 {
-	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+	gfp_t gfp_mask = htlb_alloc_mask(h);
 	int nid = folio_nid(old_folio);
 	struct folio *new_folio;
 	int ret = 0;
@@ -3088,7 +3088,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 		 * Ref count on new_folio is already zero as it was dropped
 		 * earlier.  It can be directly added to the pool free list.
 		 */
-		__prep_account_new_huge_page(h, nid);
+		__prep_account_new_huge_page(h, folio_nid(new_folio));
 		enqueue_hugetlb_folio(h, new_folio);
 
 		/*
-- 
2.39.3

