Message-ID: <20201221162851.GA22528@open-light-1.localdomain>
Date: Mon, 21 Dec 2020 11:28:53 -0500
From: Liang Li <liliang.opensource@...il.com>
To: Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
Michal Hocko <mhocko@...e.com>,
Liang Li <liliangleo@...iglobal.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: [RFC v2 PATCH 2/4] mm: Add batch size for free page reporting
Using the page order as the only threshold for page reporting
is not flexible and has some flaws.

When the system's memory becomes very fragmented, there will be a
lot of low order free pages but very few high order pages; limiting
the minimum order to pageblock_order will prevent most free pages
from being reclaimed by the host when pages are reclaimed through
the virtio-balloon driver.

Scanning a long free list is not cheap, so it's better to wake up
the page reporting worker only once enough pages have accumulated;
waking it up for a single page may not be worth the cost.

This patch adds a batch size as another threshold to control the
waking up of the reporting worker.
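
To illustrate the batching arithmetic, here is a minimal standalone
sketch of the threshold check added below. It mirrors the patch's
batch_size / page_report_batch_size names; PAGE_SHIFT = 12 (4KB pages)
is an assumption for a typical configuration, and printf stands in for
__page_reporting_notify():

    #include <stdio.h>

    #define PAGE_SHIFT 12 /* assumption: 4KB pages */

    static unsigned long page_report_batch_size = 4 * 1024 * 1024UL;
    static long batch_size; /* bytes accumulated since last notify */

    static void notify_free(unsigned int order)
    {
            /* Convert a freed block of 2^order pages into bytes. */
            batch_size += (1UL << order) << PAGE_SHIFT;
            if (batch_size >= page_report_batch_size) {
                    batch_size = 0;
                    printf("wake reporting worker\n");
            }
    }

    int main(void)
    {
            /* Two order-9 (2MB) frees reach the 4MB threshold. */
            notify_free(9);
            notify_free(9);
            return 0;
    }

With the 4MB default, an order-9 free contributes 2MB, so the worker
is woken roughly once per two freed pageblocks rather than on every
free, which is the point of the batching.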
Cc: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Alex Williamson <alex.williamson@...hat.com>
Cc: Michael S. Tsirkin <mst@...hat.com>
Signed-off-by: Liang Li <liliang324@...il.com>
---
mm/page_reporting.c | 2 ++
mm/page_reporting.h | 12 ++++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index 0b22db94ce2a..2f8e3d032fab 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -14,6 +14,8 @@
#define PAGE_REPORTING_DELAY (2 * HZ)
#define MAX_SCAN_NUM 1024
+unsigned long page_report_batch_size __read_mostly = 4 * 1024 * 1024UL;
+
static struct page_reporting_dev_info __rcu *pr_dev_info __read_mostly;
enum {
diff --git a/mm/page_reporting.h b/mm/page_reporting.h
index 2c385dd4ddbd..b8fb3bbb345f 100644
--- a/mm/page_reporting.h
+++ b/mm/page_reporting.h
@@ -12,6 +12,8 @@
#define PAGE_REPORTING_MIN_ORDER pageblock_order
+extern unsigned long page_report_batch_size;
+
#ifdef CONFIG_PAGE_REPORTING
DECLARE_STATIC_KEY_FALSE(page_reporting_enabled);
void __page_reporting_notify(void);
@@ -33,6 +35,8 @@ static inline bool page_reported(struct page *page)
*/
static inline void page_reporting_notify_free(unsigned int order)
{
+ static long batch_size;
+
/* Called from hot path in __free_one_page() */
if (!static_branch_unlikely(&page_reporting_enabled))
return;
@@ -41,8 +45,12 @@ static inline void page_reporting_notify_free(unsigned int order)
if (order < PAGE_REPORTING_MIN_ORDER)
return;
- /* This will add a few cycles, but should be called infrequently */
- __page_reporting_notify();
+ batch_size += (1 << order) << PAGE_SHIFT;
+ if (batch_size >= page_report_batch_size) {
+ batch_size = 0;
+ /* This adds a few cycles, but should be called infrequently */
+ __page_reporting_notify();
+ }
}
#else /* CONFIG_PAGE_REPORTING */
#define page_reported(_page) false
--
2.18.2