Message-Id: <20201113105952.11638-18-songmuchun@bytedance.com>
Date: Fri, 13 Nov 2020 18:59:48 +0800
From: Muchun Song <songmuchun@...edance.com>
To: corbet@....net, mike.kravetz@...cle.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
paulmck@...nel.org, mchehab+huawei@...nel.org,
pawan.kumar.gupta@...ux.intel.com, rdunlap@...radead.org,
oneukum@...e.com, anshuman.khandual@....com, jroedel@...e.de,
almasrymina@...gle.com, rientjes@...gle.com, willy@...radead.org,
osalvador@...e.de, mhocko@...e.com
Cc: duanxiongchun@...edance.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org,
Muchun Song <songmuchun@...edance.com>
Subject: [PATCH v4 17/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap

Add a kernel parameter hugetlb_free_vmemmap that allows the feature of
freeing unused vmemmap pages associated with each HugeTLB page to be
disabled at boot time.

Signed-off-by: Muchun Song <songmuchun@...edance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 mm/hugetlb_vmemmap.c                            | 22 ++++++++++++++++++++++
 3 files changed, 34 insertions(+)
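
For reference, a hedged usage sketch (not part of the patch): it assumes
CONFIG_HUGETLB_PAGE_FREE_VMEMMAP=y, the hugepagesz/hugepages values are
arbitrary examples, and the hstate name in the log line is an assumption
based on the kernel's "hugepages-%lukB" naming.

  # keep the feature enabled (the default; same as omitting the parameter)
  ... hugetlb_free_vmemmap=on hugepagesz=2M hugepages=512 ...

  # disable the feature at boot
  ... hugetlb_free_vmemmap=off hugepagesz=2M hugepages=512 ...

  # with "off", hugetlb_vmemmap_init() below is expected to log something like:
  #   disable free vmemmap pages for hugepages-2048kB
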
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..ccf07293cb63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on (default) | off }
+
+			on: enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..7d6129ee97dd 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
 
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, setting this to ``off``
+	disables freeing unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 47f81e0b3832..1528b156920c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -118,6 +118,22 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
+static bool hugetlb_free_vmemmap_disabled __initdata;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "off"))
+		hugetlb_free_vmemmap_disabled = true;
+	else if (strcmp(buf, "on"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -505,6 +521,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
+	if (hugetlb_free_vmemmap_disabled) {
+		h->nr_free_vmemmap_pages = 0;
+		pr_info("disable free vmemmap pages for %s\n", h->name);
+		return;
+	}
+
 	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page are not to be freed to buddy
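
For readers looking at this patch in isolation, a hedged sketch of why
zeroing h->nr_free_vmemmap_pages is enough to disable the feature: it
assumes, as in the earlier patches of this series, that the freeing path
is keyed off free_vmemmap_pages_per_hpage(). The function bodies below are
reproduced for illustration only and are not added by this patch.

/* Sketch only: a zeroed per-hstate count short-circuits the feature. */
static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
{
	return h->nr_free_vmemmap_pages;
}

void free_huge_page_vmemmap(struct hstate *h, struct page *head)
{
	/* hugetlb_free_vmemmap=off zeroed the count, so bail out early. */
	if (!free_vmemmap_pages_per_hpage(h))
		return;

	/* ... otherwise remap and free the unused vmemmap pages ... */
}
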
--
2.11.0