Date: Sun, 7 Apr 2024 16:02:35 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Frank van der Linden <fvdl@...gle.com>
Cc: Linux-MM <linux-mm@...ck.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 linux-kernel@...r.kernel.org,
 Roman Gushchin <roman.gushchin@...ux.dev>
Subject: Re: [PATCH 2/2] mm/hugetlb: pass correct order_per_bit to
 cma_declare_contiguous_nid



> On Apr 5, 2024, at 00:25, Frank van der Linden <fvdl@...gle.com> wrote:
> 
> The hugetlb_cma code passes 0 in the order_per_bit argument to
> cma_declare_contiguous_nid (the alignment, computed using the
> page order, is correctly passed in).
> 
> This causes a bit in the cma allocation bitmap to always represent
> a 4k page, making the bitmaps potentially very large, and slower.
> 
> So, correctly pass in the order instead.
> 
> Signed-off-by: Frank van der Linden <fvdl@...gle.com>
> Cc: Roman Gushchin <roman.gushchin@...ux.dev>
> Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
> ---
> mm/hugetlb.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 23ef240ba48a..6dc62d8b2a3a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -7873,9 +7873,9 @@ void __init hugetlb_cma_reserve(int order)
> 		* huge page demotion.
> 		*/
> 		res = cma_declare_contiguous_nid(0, size, 0,
> - 						PAGE_SIZE << HUGETLB_PAGE_ORDER,
> - 						0, false, name,
> - 						&hugetlb_cma[nid], nid);
> +						PAGE_SIZE << HUGETLB_PAGE_ORDER,
> +						HUGETLB_PAGE_ORDER, false, name,

IIUC, we could take this optimization further and change order_per_bit to
'MAX_PAGE_ORDER + 1', since only gigantic hugetlb pages can be allocated
from the CMA pool, meaning every gigantic page consists of at least
2^(MAX_PAGE_ORDER + 1) base pages.
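
As a rough illustration (a hypothetical userspace sketch, not kernel code;
it only mirrors the way cma.c sizes its bitmap as reserved pages >>
order_per_bit, and assumes 4 KiB base pages with HUGETLB_PAGE_ORDER = 9
and MAX_PAGE_ORDER = 10, as on x86_64), a 16 GiB hugetlb_cma reservation
would need:

#include <stdio.h>

int main(void)
{
	unsigned long long size = 16ULL << 30;	/* assumed 16 GiB hugetlb_cma area */
	unsigned long long pages = size >> 12;	/* 4 KiB base pages (assumption) */
	unsigned int orders[] = { 0, 9, 11 };	/* 0, HUGETLB_PAGE_ORDER, MAX_PAGE_ORDER + 1 */

	for (int i = 0; i < 3; i++) {
		unsigned long long bits = pages >> orders[i];

		printf("order_per_bit=%2u -> %7llu bits (%llu bytes of bitmap)\n",
		       orders[i], bits, bits / 8);
	}
	return 0;
}

which works out to roughly 512 KiB of bitmap with order_per_bit = 0,
1 KiB with HUGETLB_PAGE_ORDER, and 256 bytes with MAX_PAGE_ORDER + 1.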

Thanks.

> +						&hugetlb_cma[nid], nid);
> 		if (res) {
> 			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> 				res, nid);
> -- 
> 2.44.0.478.gd926399ef9-goog
> 

