Message-ID: <alpine.DEB.2.10.1611291615400.103050@chino.kir.corp.google.com>
Date: Tue, 29 Nov 2016 16:16:15 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [patch v2 1/2] mm, zone: track number of movable free pages
An upcoming compaction change will need the number of movable free pages
per zone to determine whether async compaction would become unnecessarily
expensive.
This patch introduces no functional change and no increased memory
footprint.  It simply tracks the number of free movable pages as a subset
of the total number of free pages.  The count is exported to userspace as
a new /proc/vmstat field, nr_free_movable.
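
For reference, the new counter can be read from userspace like any other
/proc/vmstat field.  A minimal sketch (illustrative only, not part of this
patch):

#include <stdio.h>

int main(void)
{
	char line[128];
	unsigned long nr;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	/* each line of /proc/vmstat is "<name> <value>" */
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "nr_free_movable %lu", &nr) == 1)
			printf("nr_free_movable: %lu pages\n", nr);
	fclose(f);
	return 0;
}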
Signed-off-by: David Rientjes <rientjes@...gle.com>
---
v2: do not track free pages for every migratetype since page allocator
    stress testing reveals the tracking can impact workloads and there is
    no substantial benefit when thp is disabled.  The cost arises because
    entire pageblocks can be converted to new migratetypes, so proper
    tracking would require iterating the zone's free_areas in hot paths.
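
    To illustrate that cost (a hypothetical sketch, not part of this
    patch): exact per-migratetype accounting after a pageblock conversion
    would mean recounting the affected free lists under zone->lock, e.g.

/*
 * Hypothetical helper: count free pages of one migratetype by walking
 * every order's free list.  Doing this from allocator hot paths is what
 * makes per-migratetype tracking too expensive.
 */
static unsigned long count_free_pages(struct zone *zone, int migratetype)
{
	unsigned long nr_pages = 0;
	unsigned int order;
	struct page *page;

	for (order = 0; order < MAX_ORDER; order++)
		list_for_each_entry(page,
				    &zone->free_area[order].free_list[migratetype],
				    lru)
			nr_pages += 1UL << order;

	return nr_pages;
}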
 include/linux/mmzone.h | 1 +
 include/linux/vmstat.h | 2 ++
 mm/page_alloc.c        | 8 +++++++-
 mm/vmstat.c            | 1 +
 4 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -138,6 +138,7 @@ enum zone_stat_item {
 	NUMA_OTHER,		/* allocation from other node */
 #endif
 	NR_FREE_CMA_PAGES,
+	NR_FREE_MOVABLE_PAGES,
 	NR_VM_ZONE_STAT_ITEMS };
 
 enum node_stat_item {
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -347,6 +347,8 @@ static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
 	if (is_migrate_cma(migratetype))
 		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
+	if (migratetype == MIGRATE_MOVABLE)
+		__mod_zone_page_state(zone, NR_FREE_MOVABLE_PAGES, nr_pages);
 }
 
 extern const char * const vmstat_text[];
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2197,6 +2197,8 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	spin_lock(&zone->lock);
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype);
+		int mt;
+
 		if (unlikely(page == NULL))
 			break;
 
@@ -2217,9 +2219,13 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		else
 			list_add_tail(&page->lru, list);
 		list = &page->lru;
-		if (is_migrate_cma(get_pcppage_migratetype(page)))
+		mt = get_pcppage_migratetype(page);
+		if (is_migrate_cma(mt))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
+		if (mt == MIGRATE_MOVABLE)
+			__mod_zone_page_state(zone, NR_FREE_MOVABLE_PAGES,
+					      -(1 << order));
 	}
 	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
 	spin_unlock(&zone->lock);
diff --git a/mm/vmstat.c b/mm/vmstat.c
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -945,6 +945,7 @@ const char * const vmstat_text[] = {
 	"numa_other",
 #endif
 	"nr_free_cma",
+	"nr_free_movable",
 
 	/* Node-based counters */
 	"nr_inactive_anon",