Message-Id: <20210616083745.14288-1-janghyuck.kim@samsung.com>
Date: Wed, 16 Jun 2021 17:37:41 +0900
From: Janghyuck Kim <janghyuck.kim@...sung.com>
To: unlisted-recipients:; (no To-header on input)
Cc: janghyuck.kim@...sung.com,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Palmer Dabbelt <palmerdabbelt@...gle.com>,
Atish Patra <atish.patra@....com>,
Gavin Shan <gshan@...hat.com>,
Zhengyuan Liu <liuzhengyuan@...kylinos.cn>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH 1/2] mm: support fastpath if NUMA is enabled with numa off

An architecture may fall back to a single fake node when CONFIG_NUMA is
enabled but no node configuration is provided by ACPI or the device
tree. In that case, looking up the memory policy on the memory
allocation path is meaningless.

Moreover, a performance degradation was observed in the minor page
fault test provided by https://lkml.org/lkml/2006/8/29/294: average
faults/sec with NUMA enabled on a fake node was 5-6% worse than with
NUMA disabled. To reduce this regression, introduce a fastpath that
skips the memory policy lookup when NUMA is enabled but only the fake
node is in use. On architectures that do not use a fake node, the
fastpath has no effect on the memory allocation path.
Signed-off-by: Janghyuck Kim <janghyuck.kim@...sung.com>
---
 mm/internal.h  | 4 ++++
 mm/mempolicy.c | 3 +++
 2 files changed, 7 insertions(+)

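For reference, the fastpath only kicks in when an architecture defines
numa_off_fastpath() before mm/internal.h installs the false fallback
below. A minimal sketch of the arch-side wiring (the
arch_numa_off_fastpath name, the numa_disabled flag, and the header
placement are assumptions for illustration, not taken from this series;
patch 2/2 carries the real arm64 change):

/* arch-side C file, e.g. the arch NUMA setup code (illustrative) */
/* Set once early boot finds no node information in ACPI or DT. */
static bool numa_disabled;

bool arch_numa_off_fastpath(void)
{
	return numa_disabled;
}

/* arch header visible to mm/internal.h (illustrative) */
bool arch_numa_off_fastpath(void);
#define numa_off_fastpath	arch_numa_off_fastpath

With that in place, alloc_pages_vma() returns early via
__alloc_pages_nodemask(gfp, order, 0, NULL), i.e. node 0 with no
nodemask, which is the only sensible target when a single fake node
exists; every other architecture compiles the check down to
if (false) and keeps the current behaviour.
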
diff --git a/mm/internal.h b/mm/internal.h
index 31ff935b2547..3b6c21814fbc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -36,6 +36,10 @@ void page_writeback_init(void);
 
 vm_fault_t do_swap_page(struct vm_fault *vmf);
 
+#ifndef numa_off_fastpath
+#define numa_off_fastpath() false
+#endif
+
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e32360e90274..21156671d941 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2152,6 +2152,9 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	int preferred_nid;
 	nodemask_t *nmask;
 
+	if (numa_off_fastpath())
+		return __alloc_pages_nodemask(gfp, order, 0, NULL);
+
 	pol = get_vma_policy(vma, addr);
 
 	if (pol->mode == MPOL_INTERLEAVE) {
--
2.28.0