Message-ID: <4652446e-2089-a3c4-fbdb-321322887392@huawei.com>
Date:   Tue, 1 Mar 2022 14:28:13 +0800
From:   Miaohe Lin <linmiaohe@...wei.com>
To:     Huang Ying <ying.huang@...el.com>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        Feng Tang <feng.tang@...el.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Michal Hocko <mhocko@...e.com>,
        Rik van Riel <riel@...riel.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
        Wei Xu <weixugc@...gle.com>,
        Oscar Salvador <osalvador@...e.de>,
        Shakeel Butt <shakeelb@...gle.com>,
        zhongjiang-ali <zhongjiang-ali@...ux.alibaba.com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for
 memory tiering system

On 2022/2/21 16:45, Huang Ying wrote:
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> memory subsystem of these machines can be called a memory tiering
> system, because the performance of the different types of memory is
> usually different.
> 
> In such a system, because the memory access pattern changes over
> time etc., some pages in the slow memory may become hot globally.  So
> in this patch, the NUMA balancing mechanism is enhanced to dynamically
> optimize the page placement among the different memory types according
> to hotness.
> 
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node.  The CPUs and the fast memory
> will be put in one logical node (called fast memory node), while the
> slow memory will be put in another (faked) logical node (called slow
> memory node).  That is, the fast memory is regarded as local while the
> slow memory is regarded as remote.  So it's possible for the recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
> 
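(For readers less familiar with such systems: on a hypothetical
2-socket machine with DRAM and PMEM on each socket, the resulting
logical node layout would look roughly like the sketch below.  The node
numbers are illustrative only, not from this patch.

    node 0: CPUs 0-31  + DRAM  (fast memory node, socket 0)
    node 1: CPUs 32-63 + DRAM  (fast memory node, socket 1)
    node 2: no CPUs    + PMEM  (slow memory node, socket 0)
    node 3: no CPUs    + PMEM  (slow memory node, socket 1)

With this layout, nodes 2 and 3 are the "faked" CPU-less nodes that
hold only slow memory.)
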
> The original NUMA balancing mechanism will stop migrating pages if
> the free memory of the target node falls below the high watermark.
> This is a reasonable policy if there's only one memory type.  But it
> makes the original NUMA balancing mechanism almost useless for
> optimizing page placement among different memory types.  Details are
> as follows.
> 
> It is common for the working-set size of the workload to be larger
> than the size of the fast memory nodes.  Otherwise, it's unnecessary
> to use the slow memory at all.  So, there are almost never enough
> free pages in the fast memory nodes, and the globally hot pages in
> the slow memory node cannot be promoted to the fast memory node.  To
> solve this issue, we have the following two choices:
> 
> a. Ignore the free pages watermark checking when promoting hot pages
>    from the slow memory node to the fast memory node.  This will
>    create some memory pressure in the fast memory node, thus
>    triggering memory reclaim, so that the cold pages in the fast
>    memory node will be demoted to the slow memory node.
> 
> b. Make kswapd of the fast memory node reclaim pages until the free
>    pages are a little above the high watermark (named the promo
>    watermark).  Then, if the free pages of the fast memory node reach
>    the high watermark and some hot pages need to be promoted, kswapd
>    of the fast memory node will be woken up to demote more cold pages
>    in the fast memory node to the slow memory node.  This will free
>    some extra space in the fast memory node, so the hot pages in the
>    slow memory node can be promoted to the fast memory node.
> 
> The choice "a" may create high memory pressure in the fast memory
> node.  If the memory pressure of the workload is high, the memory
> pressure may become so high that the memory allocation latency of the
> workload is influenced, e.g. the direct reclaiming may be triggered.
> 
> The choice "b" works much better at this aspect.  If the memory
> pressure of the workload is high, the hot pages promotion will stop
> earlier because its allocation watermark is higher than that of the

Many thanks for your patch. The patch looks good to me but I have a question.
WMARK_PROMO is only used inside pgdat_balanced() when NUMA_BALANCING_MEMORY_TIERING
is set. So its allocation watermark seems to be the same as that of normal
memory allocation. How should I understand the above sentence? Am I missing
something?

Many thanks. :)

> normal memory allocation.  So in this patch, choice "b" is
> implemented.  A new zone watermark (WMARK_PROMO) is added, which is
> larger than the high watermark and can be controlled via
> watermark_scale_factor.
> 
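For concreteness, a rough worked example of the resulting watermark
ladder (my own numbers; assuming the default watermark_scale_factor of
10, i.e. 0.1% of the zone, and the __setup_per_zone_wmarks() change in
this patch):

	/* A 64 GiB zone has 16,777,216 4 KiB pages. */
	tmp   = 16777216 * 10 / 10000;	/* ~16,777 pages, ~65.5 MiB */
	low   = min + tmp;		/* min = WMARK_MIN */
	high  = low + tmp;
	promo = high + tmp;		/* promotion stops while ~65.5 MiB
					   more free pages remain than
					   normal allocation requires */
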
> In addition to the original page placement optimization among sockets,
> the NUMA balancing mechanism is extended to optimize page placement
> according to hotness among different memory types.  So the sysctl user
> space interface (numa_balancing) is extended in a backward compatible
> way as follows, so that users can enable/disable these functionalities
> individually.
> 
> The sysctl is converted from a Boolean value to a bit field.  The
> flags are defined as:
> 
> - 0: NUMA_BALANCING_DISABLED
> - 1: NUMA_BALANCING_NORMAL
> - 2: NUMA_BALANCING_MEMORY_TIERING
> 
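As a side note, since the flags can be ORed, both optimizations can be
enabled together by writing 3.  A minimal user-space sketch (my own
illustration, not part of this patch):

	#include <stdio.h>

	int main(void)
	{
		/* NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING */
		FILE *f = fopen("/proc/sys/kernel/numa_balancing", "w");

		if (!f)
			return 1;
		fprintf(f, "%d\n", 1 | 2);
		return fclose(f) ? 1 : 0;
	}

Writing 1 preserves the old Boolean behavior, which keeps the interface
backward compatible.
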
> We have tested the patch with the pmbench memory accessing benchmark
> with an 80:20 read/write ratio and a Gaussian access address
> distribution on a 2-socket Intel server with Optane DC Persistent
> Memory Modules.  The test results show that the pmbench score can
> improve by up to 95.9%.
> 
> Thanks to Andrew Morton for helping fix the documentation format error.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
> Tested-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Rik van Riel <riel@...riel.com>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: Yang Shi <shy828301@...il.com>
> Cc: Zi Yan <ziy@...dia.com>
> Cc: Wei Xu <weixugc@...gle.com>
> Cc: Oscar Salvador <osalvador@...e.de>
> Cc: Shakeel Butt <shakeelb@...gle.com>
> Cc: zhongjiang-ali <zhongjiang-ali@...ux.alibaba.com>
> Cc: Randy Dunlap <rdunlap@...radead.org>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-mm@...ck.org
> ---
>  Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
>  include/linux/mmzone.h                      |  1 +
>  include/linux/sched/sysctl.h                | 10 +++++++
>  kernel/sched/core.c                         | 21 ++++++++++++---
>  kernel/sysctl.c                             |  2 +-
>  mm/migrate.c                                | 16 ++++++++++--
>  mm/page_alloc.c                             |  3 ++-
>  mm/vmscan.c                                 |  6 ++++-
>  8 files changed, 70 insertions(+), 18 deletions(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
> index d359bcfadd39..fdfd2b684822 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Documentation/admin-guide/sysctl/kernel.rst
> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>  numa_balancing
>  ==============
>  
> -Enables/disables automatic page fault based NUMA memory
> -balancing. Memory is moved automatically to nodes
> -that access it often.
> +Enables/disables and configures automatic page fault based NUMA memory
> +balancing.  Memory is moved automatically to nodes that access it often.
> +The value to set can be the result of ORing the following:
>  
> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
> -is a performance penalty if remote memory is accessed by a CPU. When this
> -feature is enabled the kernel samples what task thread is accessing memory
> -by periodically unmapping pages and later trapping a page fault. At the
> -time of the page fault, it is determined if the data being accessed should
> -be migrated to a local memory node.
> += =================================
> +0 NUMA_BALANCING_DISABLED
> +1 NUMA_BALANCING_NORMAL
> +2 NUMA_BALANCING_MEMORY_TIERING
> += =================================
> +
> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
> +NUMA nodes to reduce remote accesses.  On NUMA machines, there is a
> +performance penalty if remote memory is accessed by a CPU. When this
> +feature is enabled the kernel samples what task thread is accessing
> +memory by periodically unmapping pages and later trapping a page
> +fault. At the time of the page fault, it is determined if the data
> +being accessed should be migrated to a local memory node.
>  
>  The unmapping of pages and trapping faults incur additional overhead that
>  ideally is offset by improved memory locality but there is no universal
> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>  
> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> +different types of memory (represented as different NUMA nodes) to
> +place the hot pages in the fast memory.  This is also implemented
> +based on unmapping and page faults.
>  
>  numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
>  ===============================================================================================================================
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 44bd054ca12b..06bc55db19bf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -342,6 +342,7 @@ enum zone_watermarks {
>  	WMARK_MIN,
>  	WMARK_LOW,
>  	WMARK_HIGH,
> +	WMARK_PROMO,
>  	NR_WMARK
>  };
>  
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index c19dd5a2c05c..b5eec8854c5a 100644
> --- a/include/linux/sched/sysctl.h
> +++ b/include/linux/sched/sysctl.h
> @@ -23,6 +23,16 @@ enum sched_tunable_scaling {
>  	SCHED_TUNABLESCALING_END,
>  };
>  
> +#define NUMA_BALANCING_DISABLED		0x0
> +#define NUMA_BALANCING_NORMAL		0x1
> +#define NUMA_BALANCING_MEMORY_TIERING	0x2
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +extern int sysctl_numa_balancing_mode;
> +#else
> +#define sysctl_numa_balancing_mode	0
> +#endif
> +
>  /*
>   *  control realtime throttling:
>   *
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fcf0c180617c..c25348e9ae3a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4280,7 +4280,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
>  
>  #ifdef CONFIG_NUMA_BALANCING
>  
> -void set_numabalancing_state(bool enabled)
> +int sysctl_numa_balancing_mode;
> +
> +static void __set_numabalancing_state(bool enabled)
>  {
>  	if (enabled)
>  		static_branch_enable(&sched_numa_balancing);
> @@ -4288,13 +4290,22 @@ void set_numabalancing_state(bool enabled)
>  		static_branch_disable(&sched_numa_balancing);
>  }
>  
> +void set_numabalancing_state(bool enabled)
> +{
> +	if (enabled)
> +		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
> +	else
> +		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
> +	__set_numabalancing_state(enabled);
> +}
> +
>  #ifdef CONFIG_PROC_SYSCTL
>  int sysctl_numa_balancing(struct ctl_table *table, int write,
>  			  void *buffer, size_t *lenp, loff_t *ppos)
>  {
>  	struct ctl_table t;
>  	int err;
> -	int state = static_branch_likely(&sched_numa_balancing);
> +	int state = sysctl_numa_balancing_mode;
>  
>  	if (write && !capable(CAP_SYS_ADMIN))
>  		return -EPERM;
> @@ -4304,8 +4315,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
>  	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
>  	if (err < 0)
>  		return err;
> -	if (write)
> -		set_numabalancing_state(state);
> +	if (write) {
> +		sysctl_numa_balancing_mode = state;
> +		__set_numabalancing_state(state);
> +	}
>  	return err;
>  }
>  #endif
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 5ae443b2882e..c90a564af720 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
>  		.mode		= 0644,
>  		.proc_handler	= sysctl_numa_balancing,
>  		.extra1		= SYSCTL_ZERO,
> -		.extra2		= SYSCTL_ONE,
> +		.extra2		= SYSCTL_FOUR,
>  	},
>  #endif /* CONFIG_NUMA_BALANCING */
>  	{
> diff --git a/mm/migrate.c b/mm/migrate.c
> index cdeaf01e601a..08ca9b9b142e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -51,6 +51,7 @@
>  #include <linux/oom.h>
>  #include <linux/memory.h>
>  #include <linux/random.h>
> +#include <linux/sched/sysctl.h>
>  
>  #include <asm/tlbflush.h>
>  
> @@ -2034,16 +2035,27 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>  	int page_lru;
>  	int nr_pages = thp_nr_pages(page);
> +	int order = compound_order(page);
>  
> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
>  	/* Do not migrate THP mapped by multiple processes */
>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>  		return 0;
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +		int z;
> +
> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
> +			return 0;
> +		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> +			if (populated_zone(pgdat->node_zones + z))
> +				break;
> +		}
> +		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>  		return 0;
> +	}
>  
>  	if (isolate_lru_page(page))
>  		return 0;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..295b8f1fc31d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8474,7 +8474,8 @@ static void __setup_per_zone_wmarks(void)
>  
>  		zone->watermark_boost = 0;
>  		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
> -		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> +		zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
> +		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
>  
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6dd8f455bb82..199b8aadbdd6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -56,6 +56,7 @@
>  
>  #include <linux/swapops.h>
>  #include <linux/balloon_compaction.h>
> +#include <linux/sched/sysctl.h>
>  
>  #include "internal.h"
>  
> @@ -3988,7 +3989,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>  		if (!managed_zone(zone))
>  			continue;
>  
> -		mark = high_wmark_pages(zone);
> +		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
> +			mark = wmark_pages(zone, WMARK_PROMO);
> +		else
> +			mark = high_wmark_pages(zone);
>  		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>  			return true;
>  	}
> 
