Date:   Thu, 16 Jul 2020 11:25:54 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Will Deacon <will@...nel.org>, Roman Gushchin <guro@...com>
Cc:     Barry Song <song.bao.hua@...ilicon.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Catalin Marinas <catalin.marinas@....com>, x86@...nel.org,
        linux-kernel@...r.kernel.org, linuxarm@...wei.com,
        linux-mm@...ck.org, Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Jonathan Cameron <jonathan.cameron@...wei.com>,
        "H.Peter Anvin" <hpa@...or.com>, Borislav Petkov <bp@...en8.de>,
        akpm@...ux-foundation.org, Mike Rapoport <rppt@...ux.ibm.com>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v3] mm/hugetlb: split hugetlb_cma in nodes with memory

On 7/16/20 1:12 AM, Will Deacon wrote:
> On Wed, Jul 15, 2020 at 09:59:24AM -0700, Mike Kravetz wrote:
>>
>> So, everything in the existing code really depends on the hugetlb definition
>> of gigantic page (order >= MAX_ORDER).  The code to check for
>> 'order >= MAX_ORDER' in my proposed patch is just following the same
>> convention.
> 
> Fair enough, and thanks for the explanation. Maybe just chuck a comment in,
> then? Alternatively, having something like:
> 
> static inline bool page_order_is_gigantic(unsigned int order)
> {
> 	return order >= MAX_ORDER;
> }
> 
> static inline bool hstate_is_gigantic(struct hstate *h)
> {
> 	return page_order_is_gigantic(huge_page_order(h));
> }

Ok, the updated patch below includes both as well as your other suggestions.

It would be great if Roman could comment/test, as he provided the initial
code and has a well-defined use case.  Testing from Barry would also be
appreciated, as most of the recent churn has come from his testing.

>> I think the current dependency on the hugetlb definition of gigantic page
>> may be too simplistic if using CMA for hugetlb pages becomes more common.
>> Some architectures (sparc, powerpc) have more than one gigantic page size.
>> Currently there is no way to specify that CMA should be used for one and
>> not the other.  In addition, I could imagine someone wanting to reserve/use
>> CMA for non-gigantic (PMD) sized pages.  There is no mechanism for that today.
>>
>> I honestly have not heard about many use cases for this CMA functionality.
>> When support was initially added, it was driven by a specific use case and
>> the 'all gigantic pages use CMA if defined' implementation was deemed
>> sufficient.  If there are more use cases, or this seems too simple, we can
>> revisit that decision.
> 
> Agreed, I think your patch is an improvement regardless of that.
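
To make the per-size idea above concrete -- purely a hypothetical
sketch, not something the patch below implements -- the command line
could conceivably grow a form like hugetlb_cma=<pagesize>:<size>[,...].
The entry array and names below are made up for illustration:

/* Hypothetical sketch only -- not part of the patch below. */
#define HUGETLB_CMA_MAX_ENTRIES	8

struct hugetlb_cma_entry {
	unsigned long hpage_size;	/* huge page size, e.g. 2M, 1G, 16G */
	unsigned long reserve_size;	/* amount of CMA to reserve for it */
};

static struct hugetlb_cma_entry
		hugetlb_cma_entries[HUGETLB_CMA_MAX_ENTRIES] __initdata;
static int hugetlb_cma_nr_entries __initdata;

/* Parse "hugetlb_cma=<pagesize>:<size>[,<pagesize>:<size>...]" */
static int __init cmdline_parse_hugetlb_cma(char *p)
{
	while (p && *p && hugetlb_cma_nr_entries < HUGETLB_CMA_MAX_ENTRIES) {
		struct hugetlb_cma_entry *e =
				&hugetlb_cma_entries[hugetlb_cma_nr_entries];

		e->hpage_size = memparse(p, &p);
		if (*p != ':')
			return -EINVAL;
		e->reserve_size = memparse(p + 1, &p);
		hugetlb_cma_nr_entries++;
		if (*p != ',')
			break;
		p++;
	}
	return 0;
}
early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);

hugetlb_cma_reserve() would then look up the entry matching the
hstate's huge page size instead of consulting a single global
hugetlb_cma_size.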

Anshuman and Barry have pending patches that change where in arch specific
code hugetlb_cma_reserve is called.  Those changes will not be necessary if
the proposed code below proves successful.
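
To see the new flow, booting with something like:

	hugetlb_cma=2G hugepagesz=1G hugepages=2

sets hugetlb_cma_size via the early_param, and the CMA reservation then
happens when the 1G (gigantic) hstate is added via hugetlb_add_hstate.
After boot, the reservation should show up in the CMA counters, e.g.:

	$ grep Cma /proc/meminfo
	CmaTotal:        ...
	CmaFree:         ...

(exact values depend on the system and any other CMA users).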

From 17c8f37afbf42fe7412e6eebb3619c6e0b7e1c3c Mon Sep 17 00:00:00 2001
From: Mike Kravetz <mike.kravetz@...cle.com>
Date: Tue, 14 Jul 2020 15:54:46 -0700
Subject: [PATCH] hugetlb: move cma reservation to code setting up gigantic
 hstate

Instead of calling hugetlb_cma_reserve() directly from arch specific
code, call it from hugetlb_add_hstate when adding a gigantic hstate.
hugetlb_add_hstate is either called from arch specific huge page setup,
or as the result of hugetlb command line processing.  In either case,
this is late enough in the init process that all NUMA memory information
should be initialized, and early enough to still use the early memory
allocator.

Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
---
 arch/arm64/mm/init.c    | 10 ----------
 arch/x86/kernel/setup.c |  9 ---------
 include/linux/hugetlb.h | 12 ------------
 mm/hugetlb.c            | 32 ++++++++++++++++++++++++++++----
 4 files changed, 28 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 79806732f4b4..ff0ff584dde9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -427,16 +427,6 @@ void __init bootmem_init(void)
 	sparse_init();
 	zone_sizes_init(min, max);
 
-	/*
-	 * must be done after zone_sizes_init() which calls free_area_init()
-	 * that calls node_set_state() to initialize node_states[N_MEMORY]
-	 * because hugetlb_cma_reserve() will scan over nodes with N_MEMORY
-	 * state
-	 */
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
-
 	memblock_dump_all();
 }
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index a1a9712090ae..111c8467fafa 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1177,15 +1177,6 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
-	/*
-	 * must be done after zone_sizes_init() which calls free_area_init()
-	 * that calls node_set_state() to initialize node_states[N_MEMORY]
-	 * because hugetlb_cma_reserve() will scan over nodes with N_MEMORY
-	 * state
-	 */
-	if (boot_cpu_has(X86_FEATURE_GBPAGES))
-		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-
 	kasan_init();
 
 	/*
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6b9508dc31b4..5fadba1febcc 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -933,16 +933,4 @@ static inline spinlock_t *huge_pte_lock(struct hstate *h,
 	return ptl;
 }
 
-#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
-extern void __init hugetlb_cma_reserve(int order);
-extern void __init hugetlb_cma_check(void);
-#else
-static inline __init void hugetlb_cma_reserve(int order)
-{
-}
-static inline __init void hugetlb_cma_check(void)
-{
-}
-#endif
-
 #endif /* _LINUX_HUGETLB_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f24acb3af741..2ca68c5620b7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -76,8 +76,10 @@ DEFINE_SPINLOCK(hugetlb_lock);
 static int num_fault_mutexes;
 struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
-/* Forward declaration */
+/* Forward declarations */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
+static void __init hugetlb_cma_reserve(int order);
+static void __init hugetlb_cma_check(void);
 
 static inline void unlock_or_release_subpool(struct hugepage_subpool *spool)
 {
@@ -3273,6 +3275,13 @@ void __init hugetlb_add_hstate(unsigned int order)
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
+	/*
+	 * If user wants CMA for gigantic pages, reserve it now as we
+	 * set up the gigantic page hstate.
+	 */
+	if (hstate_is_gigantic(h) && hugetlb_cma_size)
+		hugetlb_cma_reserve(order);
+
 	parsed_hstate = h;
 }
 
@@ -5627,12 +5636,16 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 #ifdef CONFIG_CMA
 static bool cma_reserve_called __initdata;
 
+/*
+ * early_param, as hugetlb_cma_size must be processed before any call to
+ * hugetlb_add_hstate by arch specific code or regular hugetlb command
+ * line processing.
+ */
 static int __init cmdline_parse_hugetlb_cma(char *p)
 {
 	hugetlb_cma_size = memparse(p, &p);
 	return 0;
 }
-
 early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
 
 /*
@@ -5642,11 +5655,13 @@ early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
  * hugetlb_cma_reserve() scans over N_MEMORY nodemask and hence expects the platforms
  * to have initialized N_MEMORY state.
  */
-void __init hugetlb_cma_reserve(int order)
+static void __init hugetlb_cma_reserve(int order)
 {
 	unsigned long size, reserved, per_node;
 	int nid;
 
+	if (cma_reserve_called)
+		return;
 	cma_reserve_called = true;
 
 	if (!hugetlb_cma_size)
@@ -5693,12 +5708,21 @@ void __init hugetlb_cma_reserve(int order)
 	}
 }
 
-void __init hugetlb_cma_check(void)
+static void __init hugetlb_cma_check(void)
 {
 	if (!hugetlb_cma_size || cma_reserve_called)
 		return;
 
 	pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
 }
+#else
+
+static void __init hugetlb_cma_reserve(int order)
+{
+}
+
+static void __init hugetlb_cma_check(void)
+{
+}
 
 #endif /* CONFIG_CMA */
-- 
2.25.4
