Message-Id: <20210204101056.89336-7-ying.huang@intel.com>
Date: Thu, 4 Feb 2021 18:10:56 +0800
From: Huang Ying <ying.huang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Huang Ying <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>, Rik van Riel <riel@...hat.com>,
Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: [RFC -V5 6/6] memory tiering: add page promotion counter

Add a counter for the pages promoted by memory tiering, to distinguish
them from the pages migrated by the original inter-socket NUMA
balancing.  The counter is per-node (counted on the promotion target
node), so it can also be used to identify promotion imbalance among
the NUMA nodes.
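
For example, with both PGPROMOTE_CANDIDATE (added by the previous
patch in this series) and PGPROMOTE_SUCCESS available, per-node
promotion efficiency could be estimated roughly as follows (a minimal
in-kernel sketch for illustration only; report_promotion_stats() is a
hypothetical helper, not part of this patch):

  /*
   * Illustrative sketch only, not part of this patch: walk the online
   * nodes and print promotion candidates vs. successful promotions.
   * node_page_state() and for_each_online_node() are existing kernel
   * APIs; report_promotion_stats() is a hypothetical helper.
   */
  static void report_promotion_stats(void)
  {
  	int nid;

  	for_each_online_node(nid) {
  		pg_data_t *pgdat = NODE_DATA(nid);

  		pr_info("node %d: promote candidate %lu, success %lu\n",
  			nid,
  			node_page_state(pgdat, PGPROMOTE_CANDIDATE),
  			node_page_state(pgdat, PGPROMOTE_SUCCESS));
  	}
  }
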
Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Rik van Riel <riel@...hat.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org
---
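Note: the new counter is exported as "pgpromote_success" in
/proc/vmstat and in each node's /sys/devices/system/node/nodeN/vmstat
file, next to the existing "pgpromote_candidate" counter.
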
 include/linux/mmzone.h |  1 +
 mm/migrate.c           | 10 +++++++++-
 mm/vmstat.c            |  1 +
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 897331d5e57c..52c68f59f378 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -209,6 +209,7 @@ enum node_stat_item {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
+	PGPROMOTE_SUCCESS,	/* pages promoted successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/mm/migrate.c b/mm/migrate.c
index 0982919f6798..eb2130b4ecb5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2175,8 +2175,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
+	} else {
 		count_vm_numa_event(NUMA_PAGE_MIGRATE);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 	BUG_ON(!list_empty(&migratepages));
 
 	return isolated;
@@ -2301,6 +2306,9 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	mod_node_page_state(page_pgdat(page),
 			NR_ISOLATED_ANON + page_lru,
 			-HPAGE_PMD_NR);
+	if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+		mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+				    HPAGE_PMD_NR);
 	return isolated;
 
 out_fail:
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 0678da1db47a..3786d8773404 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1217,6 +1217,7 @@ const char * const vmstat_text[] = {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	"pgpromote_candidate",
+	"pgpromote_success",
 #endif
 
 	/* enum writeback_stat_item counters */
--
2.29.2