Message-Id: <20211020170305.376118-9-ankur.a.arora@oracle.com>
Date: Wed, 20 Oct 2021 10:02:59 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Cc: mingo@...nel.org, bp@...en8.de, luto@...nel.org,
akpm@...ux-foundation.org, mike.kravetz@...cle.com,
jon.grimm@....com, kvm@...r.kernel.org, konrad.wilk@...cle.com,
boris.ostrovsky@...cle.com, Ankur Arora <ankur.a.arora@...cle.com>
Subject: [PATCH v2 08/14] mm/clear_page: add clear_page_uncached_threshold()

Introduce clear_page_uncached_threshold, which provides the threshold
above which clear_page_uncached() is used.

The ideal threshold value depends on the CPU architecture: specifically,
on where the performance curves for cached and uncached stores
intersect. Typically this is a function of microarchitectural details
and the LLC size.

Here we choose 8MB (CLEAR_PAGE_UNCACHED_THRESHOLD) as the default,
which corresponds to a reasonably sized LLC.
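
An architecture can override this default via
arch_clear_page_uncached_threshold(). For illustration, a sketch of
what such an override might look like (not part of this patch; it
derives the value from the boot CPU's last-level cache size via the
generic cacheinfo interface):

    unsigned long __init arch_clear_page_uncached_threshold(void)
    {
            struct cpu_cacheinfo *cci = get_cpu_cacheinfo(0);
            unsigned long llc_size = 0;
            unsigned int i;

            /* Sum CPU0's leaves at the last level (data + unified). */
            for (i = 0; i < cci->num_leaves; i++)
                    if (cci->info_list[i].level == cci->num_levels)
                            llc_size += cci->info_list[i].size;

            /* Fall back to the default if cacheinfo has nothing. */
            return llc_size ?: CLEAR_PAGE_UNCACHED_THRESHOLD;
    }

The generic code samples this from a late_initcall so that cacheinfo,
which is populated via device_initcall, is available by then.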

Also define clear_page_prefer_uncached(), which provides the interface
for querying whether a given extent is large enough to warrant uncached
clearing.
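
For instance, a caller clearing a contiguous range might use it as
below. This is a sketch of the intended use, not a caller added by this
patch; it assumes clear_user_page_uncached() (added earlier in this
series) mirrors clear_user_page()'s signature:

    static void clear_contig_pages(struct page *page, unsigned long vaddr,
                                   unsigned int npages)
    {
            /* Decide based on the total extent, not the per-page size. */
            bool uncached = clear_page_prefer_uncached(npages * PAGE_SIZE);
            unsigned int i;

            for (i = 0; i < npages; i++) {
                    void *kaddr = kmap_local_page(page + i);

                    if (uncached)
                            clear_user_page_uncached(kaddr,
                                    vaddr + i * PAGE_SIZE, page + i);
                    else
                            clear_user_page(kaddr,
                                    vaddr + i * PAGE_SIZE, page + i);
                    kunmap_local(kaddr);
            }
    }
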
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
 include/linux/mm.h | 18 ++++++++++++++++++
 mm/memory.c        | 30 ++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b88069d1116c..49a97f817eb2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3190,6 +3190,24 @@ static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
}
+/*
+ * Default size beyond which huge page clearing uses the uncached
+ * path. We size it for a reasonably sized LLC.
+ */
+#define CLEAR_PAGE_UNCACHED_THRESHOLD (8 << 20)
+
+/*
+ * Arch-specific code can define arch_clear_page_uncached_threshold()
+ * to override CLEAR_PAGE_UNCACHED_THRESHOLD with a machine-specific value.
+ */
+extern unsigned long __init arch_clear_page_uncached_threshold(void);
+
+extern bool clear_page_prefer_uncached(unsigned long extent);
+#else
+static inline bool clear_page_prefer_uncached(unsigned long extent)
+{
+ return false;
+}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
#ifndef clear_user_page_uncached
diff --git a/mm/memory.c b/mm/memory.c
index adf9b9ef8277..9f6059520985 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5266,6 +5266,36 @@ EXPORT_SYMBOL(__might_fault);
#endif
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
+
+static unsigned long __read_mostly clear_page_uncached_threshold =
+	CLEAR_PAGE_UNCACHED_THRESHOLD / PAGE_SIZE;
+
+/* Arch code can override this with a machine-specific value. */
+unsigned long __weak __init arch_clear_page_uncached_threshold(void)
+{
+ return CLEAR_PAGE_UNCACHED_THRESHOLD;
+}
+
+static int __init setup_clear_page_uncached_threshold(void)
+{
+ clear_page_uncached_threshold =
+ arch_clear_page_uncached_threshold() / PAGE_SIZE;
+ return 0;
+}
+
+/*
+ * cacheinfo is set up via a device_initcall, and we want to run
+ * after that. Use the default value until then.
+ */
+late_initcall(setup_clear_page_uncached_threshold);
+
+bool clear_page_prefer_uncached(unsigned long extent)
+{
+ unsigned long pages = extent / PAGE_SIZE;
+
+ return pages >= clear_page_uncached_threshold;
+}
+
/*
* Process all subpages of the specified huge page with the specified
* operation. The target subpage will be processed last to keep its
--
2.29.2