Message-ID: <87a6jco2f9.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 14 Oct 2021 08:50:02 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Yang Shi <shy828301@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Rik van Riel <riel@...riel.com>,
Mel Gorman <mgorman@...e.de>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Zi Yan <ziy@...dia.com>, Wei Xu <weixugc@...gle.com>,
osalvador <osalvador@...e.de>,
Shakeel Butt <shakeelb@...gle.com>,
Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH -V9 1/6] NUMA Balancing: add page promotion counter
Yang Shi <shy828301@...il.com> writes:
> On Fri, Oct 8, 2021 at 1:40 AM Huang Ying <ying.huang@...el.com> wrote:
>>
>> In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
>> and DRAM in one socket will be put in one NUMA node as before, while
>> the PMEM will be put in another NUMA node, as described in commit
>> c221c0b0308f ("device-dax: "Hotplug" persistent memory for use like
>> normal RAM"). So the NUMA balancing mechanism will identify all PMEM
>> accesses as remote accesses and try to promote the PMEM pages to
>> DRAM.
>>
>> To distinguish the number of inter-type promoted pages from that of
>> inter-socket migrated pages, a new vmstat counter is added. The
>> counter is per-node (counted in the target node), so it can be used
>> to identify promotion imbalance among the NUMA nodes.
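(Illustration, not part of the patch: because pgpromote_success is a
per-node counter, promotion imbalance can be checked from user space
by comparing the value across DRAM nodes. The sketch below assumes
the usual /sys/devices/system/node/node<N>/vmstat layout; the helper
name read_pgpromote_success() is hypothetical.)

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical helper: return pgpromote_success for one NUMA node,
   * or -1 if the node's vmstat file cannot be read or lacks the field.
   */
  static long read_pgpromote_success(int node)
  {
          char path[64], name[64];
          long val;
          FILE *f;

          snprintf(path, sizeof(path),
                   "/sys/devices/system/node/node%d/vmstat", node);
          f = fopen(path, "r");
          if (!f)
                  return -1;
          while (fscanf(f, "%63s %ld", name, &val) == 2) {
                  if (!strcmp(name, "pgpromote_success")) {
                          fclose(f);
                          return val;
                  }
          }
          fclose(f);
          return -1;
  }

Comparing this value across the top-tier nodes shows whether
promotions concentrate on one target node.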
>>
>> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: Michal Hocko <mhocko@...e.com>
>> Cc: Rik van Riel <riel@...riel.com>
>> Cc: Mel Gorman <mgorman@...e.de>
>> Cc: Peter Zijlstra <peterz@...radead.org>
>> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
>> Cc: Yang Shi <shy828301@...il.com>
>> Cc: Zi Yan <ziy@...dia.com>
>> Cc: Wei Xu <weixugc@...gle.com>
>> Cc: osalvador <osalvador@...e.de>
>> Cc: Shakeel Butt <shakeelb@...gle.com>
>> Cc: linux-kernel@...r.kernel.org
>> Cc: linux-mm@...ck.org
>> ---
>> include/linux/mmzone.h | 3 +++
>> include/linux/node.h | 5 +++++
>> include/linux/vmstat.h | 2 ++
>> mm/migrate.c | 10 ++++++++--
>> mm/vmstat.c | 3 +++
>> 5 files changed, 21 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index 6a1d79d84675..37ccd6158765 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -209,6 +209,9 @@ enum node_stat_item {
>> NR_PAGETABLE, /* used for pagetables */
>> #ifdef CONFIG_SWAP
>> NR_SWAPCACHE,
>> +#endif
>> +#ifdef CONFIG_NUMA_BALANCING
>> + PGPROMOTE_SUCCESS, /* promote successfully */
>> #endif
>> NR_VM_NODE_STAT_ITEMS
>> };
>> diff --git a/include/linux/node.h b/include/linux/node.h
>> index 8e5a29897936..26e96fcc66af 100644
>> --- a/include/linux/node.h
>> +++ b/include/linux/node.h
>> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>>
>> #define to_node(device) container_of(device, struct node, dev)
>>
>> +static inline bool node_is_toptier(int node)
>> +{
>> + return node_state(node, N_CPU);
>> +}
>> +
>> #endif /* _LINUX_NODE_H_ */
>> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
>> index d6a6cf53b127..75c53b7d1539 100644
>> --- a/include/linux/vmstat.h
>> +++ b/include/linux/vmstat.h
>> @@ -112,9 +112,11 @@ static inline void vm_events_fold_cpu(int cpu)
>> #ifdef CONFIG_NUMA_BALANCING
>> #define count_vm_numa_event(x) count_vm_event(x)
>> #define count_vm_numa_events(x, y) count_vm_events(x, y)
>> +#define mod_node_balancing_page_state(n, i, v) mod_node_page_state(n, i, v)
>
> I don't quite get why we need this new API. Doesn't __count_vm_events() work?
PGPROMOTE_SUCCESS is a per-node counter. That is, its type is enum
node_stat_item instead of enum vm_event_item, so we need to use
mod_node_page_state() instead of count_vm_events(). The new API
exists to avoid an #ifdef CONFIG_NUMA_BALANCING/#endif block at each
call site.
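A minimal sketch of what the wrapper buys at the call site
(illustrative only, mirroring the macros above and the mm/migrate.c
hunk below):

  /* Without the wrapper, every caller needs its own guard, because
   * PGPROMOTE_SUCCESS only exists under CONFIG_NUMA_BALANCING:
   */
  #ifdef CONFIG_NUMA_BALANCING
          mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
                              nr_succeeded);
  #endif

  /* With the wrapper, the call site stays unconditional. The
   * !CONFIG_NUMA_BALANCING stub expands to "do {} while (0)" and
   * discards its arguments during preprocessing, so the otherwise
   * undefined PGPROMOTE_SUCCESS token never reaches the compiler:
   */
          mod_node_balancing_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
                                        nr_succeeded);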
Best Regards,
Huang, Ying
>> #else
>> #define count_vm_numa_event(x) do {} while (0)
>> #define count_vm_numa_events(x, y) do { (void)(y); } while (0)
>> +#define mod_node_balancing_page_state(n, i, v) do {} while (0)
>> #endif /* CONFIG_NUMA_BALANCING */
>>
>> #ifdef CONFIG_DEBUG_TLBFLUSH
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index a6a7743ee98f..c3affc587902 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2148,6 +2148,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> pg_data_t *pgdat = NODE_DATA(node);
>> int isolated;
>> int nr_remaining;
>> + int nr_succeeded;
>> LIST_HEAD(migratepages);
>> new_page_t *new;
>> bool compound;
>> @@ -2186,7 +2187,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>>
>> list_add(&page->lru, &migratepages);
>> nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
>> - MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
>> + MIGRATE_ASYNC, MR_NUMA_MISPLACED,
>> + &nr_succeeded);
>> if (nr_remaining) {
>> if (!list_empty(&migratepages)) {
>> list_del(&page->lru);
>> @@ -2195,8 +2197,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> putback_lru_page(page);
>> }
>> isolated = 0;
>> - } else
>> + } else {
>> count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
>> + if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
>> + mod_node_balancing_page_state(
>> + NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
>> + }
>> BUG_ON(!list_empty(&migratepages));
>> return isolated;
>>
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 8ce2620344b2..fff0ec94d795 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/vmstat.c
>> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
>> #ifdef CONFIG_SWAP
>> "nr_swapcached",
>> #endif
>> +#ifdef CONFIG_NUMA_BALANCING
>> + "pgpromote_success",
>> +#endif
>>
>> /* enum writeback_stat_item counters */
>> "nr_dirty_threshold",
>> --
>> 2.30.2
>>