Message-ID: <20201222074910.GA30051@open-light-1.localdomain>
Date: Tue, 22 Dec 2020 02:49:13 -0500
From: Liang Li <liliang.opensource@...il.com>
To: Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
Michal Hocko <mhocko@...e.com>,
Liang Li <liliangleo@...iglobal.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Liang Li <liliang324@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, qemu-devel@...gnu.org
Subject: [RFC PATCH 3/3] mm: support free hugepage pre zero out

This patch adds support for pre-zeroing free hugepages. The feature
can be used to speed up both page population and page fault handling.
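
For illustration, here is a minimal userspace sketch of the idea; it
is not the kernel implementation, and all names in it (prezero_pass,
alloc_zeroed_buf, the pool arrays) are invented for the example. A
background pass zeroes buffers while they sit idle in a free pool, so
handing one out later can skip the clearing step, analogous to how
kzeropaged moves the clear_huge_page() cost off the fault path:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define POOL_SIZE 8
#define BUF_SIZE (2UL << 20) /* stand-in for a 2MB hugepage */

static void *pool[POOL_SIZE];
static bool zeroed[POOL_SIZE];

/* Background worker: zero free buffers before anyone asks for them. */
static void prezero_pass(void)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++)
		if (pool[i] && !zeroed[i]) {
			memset(pool[i], 0, BUF_SIZE);
			zeroed[i] = true;
		}
}

/* Allocation path: a pre-zeroed buffer skips the memset entirely. */
static void *alloc_zeroed_buf(void)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++) {
		void *p = pool[i];

		if (!p)
			continue;
		pool[i] = NULL;
		if (!zeroed[i])
			memset(p, 0, BUF_SIZE); /* slow path */
		zeroed[i] = false;
		return p;
	}
	return calloc(1, BUF_SIZE); /* pool empty */
}

int main(void)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++)
		pool[i] = malloc(BUF_SIZE);

	prezero_pass();           /* done off the hot path */
	free(alloc_zeroed_buf()); /* fast: no clearing needed here */
	for (i = 0; i < POOL_SIZE; i++)
		free(pool[i]);
	return 0;
}
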
Cc: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Alex Williamson <alex.williamson@...hat.com>
Cc: Michael S. Tsirkin <mst@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Liang Li <liliang324@...il.com>
Signed-off-by: Liang Li <liliangleo@...iglobal.com>
---
mm/page_prezero.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/mm/page_prezero.c b/mm/page_prezero.c
index c8ce720bfc54..dff4e0adf402 100644
--- a/mm/page_prezero.c
+++ b/mm/page_prezero.c
@@ -26,6 +26,7 @@ static unsigned long delay_millisecs = 1000;
 static unsigned long zeropage_enable __read_mostly;
 static DEFINE_MUTEX(kzeropaged_mutex);
 static struct page_reporting_dev_info zero_page_dev_info;
+static struct page_reporting_dev_info zero_hugepage_dev_info;
 
 inline void clear_zero_page_flag(struct page *page, int order)
 {
@@ -69,9 +70,17 @@ static int start_kzeropaged(void)
 		zero_page_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
 
 		err = page_reporting_register(&zero_page_dev_info);
+
+		zero_hugepage_dev_info.report = zero_free_pages;
+		zero_hugepage_dev_info.mini_order = mini_page_order;
+		zero_hugepage_dev_info.batch_size = batch_size;
+		zero_hugepage_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
+
+		err |= hugepage_reporting_register(&zero_hugepage_dev_info);
 		pr_info("Zero page enabled\n");
 	} else {
 		page_reporting_unregister(&zero_page_dev_info);
+		hugepage_reporting_unregister(&zero_hugepage_dev_info);
 		pr_info("Zero page disabled\n");
 	}
 
@@ -90,7 +99,15 @@ static int restart_kzeropaged(void)
 		zero_page_dev_info.batch_size = batch_size;
 		zero_page_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
 
+		hugepage_reporting_unregister(&zero_hugepage_dev_info);
+
+		zero_hugepage_dev_info.report = zero_free_pages;
+		zero_hugepage_dev_info.mini_order = mini_page_order;
+		zero_hugepage_dev_info.batch_size = batch_size;
+		zero_hugepage_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
+
 		err = page_reporting_register(&zero_page_dev_info);
+		err |= hugepage_reporting_register(&zero_hugepage_dev_info);
 		pr_info("Zero page enabled\n");
 	}
--
2.18.2