Message-ID: <b4e303ab-9692-8fd2-fa5f-1b07248d07b6@linux.alibaba.com>
Date:   Thu, 5 Dec 2019 14:54:47 +0800
From:   王贇 <yun.wang@...ux.alibaba.com>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Iurii Zaikin <yzaikin@...gle.com>,
        Michal Koutný <mkoutny@...e.com>,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-doc@...r.kernel.org,
        "Paul E. McKenney" <paulmck@...ux.ibm.com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Jonathan Corbet <corbet@....net>
Subject: [PATCH v5 2/2] sched/numa: documentation for per-cgroup numa statistics

Add the description for 'numa_locality', along with a new document
explaining how to use the per-cgroup NUMA statistics.

Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Michal Koutný <mkoutny@...e.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Jonathan Corbet <corbet@....net>
Cc: Iurii Zaikin <yzaikin@...gle.com>
Cc: Randy Dunlap <rdunlap@...radead.org>
Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
---
 Documentation/admin-guide/cg-numa-stat.rst      | 178 ++++++++++++++++++++++++
 Documentation/admin-guide/index.rst             |   1 +
 Documentation/admin-guide/kernel-parameters.txt |   4 +
 Documentation/admin-guide/sysctl/kernel.rst     |   9 ++
 init/Kconfig                                    |   2 +
 5 files changed, 194 insertions(+)
 create mode 100644 Documentation/admin-guide/cg-numa-stat.rst

diff --git a/Documentation/admin-guide/cg-numa-stat.rst b/Documentation/admin-guide/cg-numa-stat.rst
new file mode 100644
index 000000000000..30ebe5d6404f
--- /dev/null
+++ b/Documentation/admin-guide/cg-numa-stat.rst
@@ -0,0 +1,178 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================
+Per-cgroup NUMA statistics
+===============================
+
+Background
+----------
+
+On NUMA platforms, remote memory accesses always carry a performance penalty.
+Although NUMA Balancing works hard to maximize access locality, there are
+still situations where it can't help.
+
+This can happen in modern production environments. When a large number of
+cgroups are used to classify and control resources, the result is a complex
+configuration of memory policies, CPUs and NUMA nodes. In such cases NUMA
+balancing may end up with the wrong memory policy or an exhausted local NUMA
+node, which leads to a low percentage of local page accesses.
+
+We need to detect such cases and figure out which workloads from which
+cgroups have introduced the issue, so we have a chance to make adjustments
+and avoid performance degradation.
+
+However, there are no hardware counters for per-task local/remote access
+info, so we don't know how many remote page accesses have occurred for a
+particular task.
+
+NUMA Locality
+-------------
+
+Fortunately, NUMA Balancing periodically scans a task's mapping and triggers
+page faults, giving us the opportunity to record per-task page access info:
+when the CPU taking the page fault is on the same node as the page, we count
+a local page access, otherwise a remote one. We call these two counters the
+locality info.
+
+On each tick, we acquire the locality info of the current task on that CPU
+and add the increments into its cgroup, forming the group locality info.
+
+By "echo 1 > /proc/sys/kernel/numa_locality" at runtime, or with the boot
+parameter 'numa_locality', we enable the accounting of per-cgroup NUMA
+locality info, and the 'cpu.numa_stat' entry of the CPU cgroup will show
+statistics::
+
+  page_access local=NR_LOCAL_PAGE_ACCESS remote=NR_REMOTE_PAGE_ACCESS
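+
+For a quick check (the cgroup v1 mount point and the group name 'mygroup'
+below are illustrative, adjust them to your setup)::
+
+  echo 1 > /proc/sys/kernel/numa_locality
+  cat /sys/fs/cgroup/cpu/mygroup/cpu.numa_stat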
+
+We define 'NUMA locality' as::
+
+  NR_LOCAL_PAGE_ACCESS * 100 / (NR_LOCAL_PAGE_ACCESS + NR_REMOTE_PAGE_ACCESS)
+
+This per-cgroup percentage helps to represent the NUMA Balancing behavior.
+
+Note that the accounting is hierarchical, which means the NUMA locality info
+for a given group represents not only the workload of this group, but also
+the workloads of all its descendants.
+
+For example, if 'cpu.numa_stat' shows::
+
+  page_access local=129909383 remote=18265810
+
+then the NUMA locality is calculated as::
+
+  129909383 * 100 / (129909383 + 18265810) = 87.67
+
+Thus we know the workload of this group and its descendants have done
+129909383 local page accesses and 18265810 remote ones in total; the
+locality is 87.67%, which implies most of the memory accesses are local.
+
+NUMA Consumption
+----------------
+
+There are also other cgroup entries that help us estimate NUMA efficiency:
+'cpuacct.usage_percpu' and 'memory.numa_stat'.
+
+By reading 'cpuacct.usage_percpu' we get the per-CPU runtime info (in
+nanoseconds, hierarchical) as::
+
+  CPU_0_RUNTIME CPU_1_RUNTIME CPU_2_RUNTIME ... CPU_X_RUNTIME
+
+Combined with the info from::
+
+  cat /sys/devices/system/node/nodeX/cpulist
+
+we are able to accumulate the runtime of CPUs into NUMA nodes and get the
+per-cgroup per-node runtime info.
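+
+A minimal sketch of that accumulation (the cgroup v1 paths and the group
+name 'mygroup' are illustrative)::
+
+  # per-CPU runtime of the group, one value per CPU
+  read -ra runtime < /sys/fs/cgroup/cpuacct/mygroup/cpuacct.usage_percpu
+  for node in /sys/devices/system/node/node*; do
+      total=0
+      # expand a '0-3,8-11' style cpulist into individual CPU ids
+      for range in $(tr ',' ' ' < "$node/cpulist"); do
+          for cpu in $(seq "${range%-*}" "${range#*-}"); do
+              total=$((total + ${runtime[cpu]:-0}))
+          done
+      done
+      echo "$(basename "$node"): $total ns"
+  done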
+
+By reading 'memory.numa_stat' we get the per-cgroup per-node memory
+consumption info as::
+
+  total=TOTAL_MEM N0=MEM_ON_NODE0 N1=MEM_ON_NODE1 ... NX=MEM_ON_NODEX
+
+Together we call these the per-cgroup NUMA consumption info; they tell us
+how many resources a particular workload has consumed on a particular NUMA
+node.
+
+Monitoring
+----------
+
+By monitoring the increments of locality info, we can easily know whether NUMA
+Balancing is working well for a particular workload.
+
+For example, with a 5 second sample period, on each sampling we have::
+
+  local_diff = nr_local_page_access - last_nr_local_page_access
+  remote_diff = nr_remote_page_access - last_nr_remote_page_access
+
+and we get the locality in this period as::
+
+  locality = local_diff * 100 / (local_diff + remote_diff)
+
+We can plot a line for the locality: when the line is close to 100% things
+are good; when it gets close to 0% something is wrong. We can pick a proper
+watermark to trigger a warning message.
+
+You may want to drop the data if local_diff + remote_diff is too small, which
+implies there were not many pages available for NUMA Balancing to scan;
+ignoring such samples is fine, since most likely the workload is insensitive
+to NUMA or the memory topology is already good enough.
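+
+Putting the watermark and the minimum-sample filter together, a sampling
+loop could look like this (the watermark, the minimum and the cgroup path
+are illustrative)::
+
+  STAT=/sys/fs/cgroup/cpu/mygroup/cpu.numa_stat
+  last_local=0 last_remote=0
+  while sleep 5; do
+      stat=$(cat "$STAT")          # page_access local=N remote=M
+      local_now=${stat#*local=};  local_now=${local_now%% *}
+      remote_now=${stat#*remote=}
+      local_diff=$((local_now - last_local))
+      remote_diff=$((remote_now - last_remote))
+      last_local=$local_now; last_remote=$remote_now
+      total=$((local_diff + remote_diff))
+      # note: the first sample covers all history since enabling
+      [ "$total" -lt 1000 ] && continue    # too few samples, drop
+      locality=$((local_diff * 100 / total))
+      [ "$locality" -lt 85 ] && echo "warning: locality ${locality}%"
+  done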
+
+Monitoring the root group helps you watch the overall situation, while you
+may also want to monitor all the leaf groups which contain the workloads;
+this helps to catch the culprit.
+
+Try to also put your workload into the cpuacct & memory cgroups. When NUMA
+Balancing is disabled or the locality becomes too small, we may want to
+monitor the per-node runtime & memory info to see whether the node
+consumption meets the requirements.
+
+For NUMA node X on each sampling we have::
+
+  runtime_X_diff = runtime_X - last_runtime_X
+  runtime_all_diff = runtime_all - last_runtime_all
+
+  runtime_percent_X = runtime_X_diff * 100 / runtime_all_diff
+  memory_percent_X = memory_X * 100 / memory_all
+
+These two percentages usually match on each node: a workload should execute
+mostly on the node that contains most of its memory. But this is not
+guaranteed.
+
+The workload may only access a small part of its memory; in such cases,
+although the majority of the memory is remote, the locality could still be
+good.
+
+Thus telling whether things are fine or not depends on an understanding of
+the system resource deployment. However, if you find node X has 100% memory
+percent but 0% runtime percent, something is definitely wrong.
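+
+For instance, the memory percent of node 0 can be picked out of the group's
+'memory.numa_stat' like this (the path is hypothetical, cgroup v1 format
+assumed)::
+
+  stat=$(head -1 /sys/fs/cgroup/memory/mygroup/memory.numa_stat)
+  total=${stat#total=};  total=${total%% *}
+  n0=${stat#*N0=};       n0=${n0%% *}
+  echo "node0 memory percent: $((n0 * 100 / total))%"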
+
+Troubleshooting
+---------------
+
+After identifying which workload introduced the bad locality, check:
+
+1). Is the workload bound to a particular NUMA node?
+2). Has any NUMA node run out of resources?
+
+There are several ways to bind a task's memory to a NUMA node. The strict
+ways, like the MPOL_BIND memory policy or 'cpuset.mems', will limit the
+nodes from which pages can be allocated. In this situation, the admin should
+make sure the task is allowed to run on the CPUs of that NUMA node, and make
+sure there is available CPU resource there.
+
+There are also ways to bind a task's CPUs to a NUMA node, like 'cpuset.cpus'
+or the sched_setaffinity() syscall. In this situation, NUMA Balancing helps
+to migrate pages into that node; the admin should make sure there is
+available memory there.
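+
+Some quick ways to check such bindings (the paths are illustrative)::
+
+  cat /sys/fs/cgroup/cpuset/mygroup/cpuset.mems
+  cat /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus
+  grep -E 'Cpus_allowed_list|Mems_allowed_list' /proc/$PID/status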
+
+The admin could try to rebind or unbind the NUMA node to erase the damage:
+make a change, then observe the statistics to see whether things get better,
+until the situation is acceptable.
+
+Highlights
+----------
+
+For some tasks, NUMA Balancing may find it unnecessary to scan pages, and
+the locality could stay 0 or a small number; don't pay attention to them,
+since they are most likely insensitive to NUMA.
+
+There is no accounting until the option is turned on, so enable it in advance
+if you want to have the whole history.
+
+We have a per-task migfailed counter telling how many page migrations have
+failed for a particular task; you will find it in the /proc/PID/sched entry.
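+
+For example (assuming the counter shows up under that name)::
+
+  grep migfailed /proc/$PID/sched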
diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
index 4405b7485312..c75a3fdfcd94 100644
--- a/Documentation/admin-guide/index.rst
+++ b/Documentation/admin-guide/index.rst
@@ -112,6 +112,7 @@ configure specific aspects of kernel behavior to your liking.
    video-output
    wimax/index
    xfs
+   cg-numa-stat

 .. only::  subproject and html

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0945611b3877..9d9e57d19af3 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3227,6 +3227,10 @@
 	numa_balancing=	[KNL,X86] Enable or disable automatic NUMA balancing.
 			Allowed values are enable and disable

+	numa_locality	[KNL] Enable per-cgroup NUMA locality info.
+			Useful to debug NUMA efficiency problems when there are
+			lots of per-cgroup workloads.
+
 	numa_zonelist_order= [KNL, BOOT] Select zonelist order for NUMA.
 			'node', 'default' can be specified
 			This can be set from sysctl after boot.
diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 7e203b3ed331..efa995e757fd 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -572,6 +572,15 @@ rate for each task.
 numa_balancing_scan_size_mb is how many megabytes worth of pages are
 scanned for a given scan.

+numa_locality:
+==============
+
+Enables/disables per-cgroup NUMA locality info.
+
+0: disabled (default).
+1: enabled.
+
+Check Documentation/admin-guide/cg-numa-stat.rst for details.

 osrelease, ostype & version:
 ============================
diff --git a/init/Kconfig b/init/Kconfig
index c614ba6bdcc2..3538fdd73387 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -825,6 +825,8 @@ config CGROUP_NUMA_LOCALITY
 	  This option enables the collection of per-cgroup NUMA locality info,
 	  to tell whether NUMA Balancing is working well for a particular
 	  workload, also imply the NUMA efficiency.
+	  See Documentation/admin-guide/cg-numa-stat.rst for details.

 menuconfig CGROUPS
 	bool "Control Group support"
-- 
2.14.4.44.g2045bb6
