Date:   Thu, 27 Oct 2022 10:53:00 +0800
From:   Gang Li <ligang.bdlg@...edance.com>
To:     unlisted-recipients:; (no To-header on input)
Cc:     Gang Li <ligang.bdlg@...edance.com>, linux-api@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH v5 0/2] sched/numa: add per-process numa_balancing

# Introduction
Add a PR_NUMA_BALANCING option to prctl().

The large number of page faults incurred while NUMA balancing is running
causes a performance loss, so processes that care about worst-case
performance need NUMA balancing disabled. Others, on the contrary, can
tolerate a temporary performance loss in exchange for higher average
performance, so enabling NUMA balancing is better for them.

Currently, NUMA balancing can only be controlled globally via
/proc/sys/kernel/numa_balancing. Because of the cases above, we want to be
able to enable/disable numa_balancing per process as well.

Set per-process numa balancing:
	prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DISABLE); //disable
	prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_ENABLE);  //enable
	prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DEFAULT); //follow global
Get numa_balancing state:
	prctl(PR_NUMA_BALANCING, PR_GET_NUMA_BALANCING, &ret);
	cat /proc/<pid>/status | grep NumaB_mode
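
As a concrete illustration of the interface above, here is a minimal
user-space sketch. The PR_NUMA_BALANCING* constants are introduced by this
patchset's include/uapi/linux/prctl.h, so it only builds against headers
from a patched tree:

	/* Minimal sketch of the proposed prctl interface. */
	#include <stdio.h>
	#include <sys/prctl.h>

	int main(void)
	{
		int mode = -1;

		/* Force NUMA balancing off for the calling process only. */
		if (prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DISABLE))
			perror("PR_SET_NUMA_BALANCING_DISABLE");

		/* Read the per-process state back, as in the usage above. */
		if (prctl(PR_NUMA_BALANCING, PR_GET_NUMA_BALANCING, &mode) == 0)
			printf("numa_balancing mode: %d\n", mode);

		return 0;
	}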

# Unixbench multithreaded results
I ran the benchmark 20 times, but there is still measurement error. I will
run the benchmark more precisely for the next version of this patchset.
+-------------------+----------+
|       NAME        | OVERHEAD |
+-------------------+----------+
| Dhrystone2        | -0.27%   |
| Whetstone         | -0.17%   |
| Execl             | -0.92%   |
| File_Copy_1024    | 0.31%    |
| File_Copy_256     | -1.96%   |
| File_Copy_4096    | 0.40%    |
| Pipe_Throughput   | -3.08%   |
| Context_Switching | -1.11%   |
| Process_Creation  | 3.24%    |
| Shell_Scripts_1   | 0.26%    |
| Shell_Scripts_8   | 0.32%    |
| System_Call       | 0.10%    |
+-------------------+----------+
| Total             | -0.21%   |
+-------------------+----------+

# Changes
Changes in v5:
- replace numab_enabled with numa_balancing_mode (Peter Zijlstra)
- make numa_balancing_enabled and numa_balancing_mode inline (Peter Zijlstra)
- use static_branch_inc/dec instead of static_branch_enable/disable (Peter Zijlstra); see the sketch after this list
- delete CONFIG_NUMA_BALANCING in task_tick_fair (Peter Zijlstra)
- reword commit messages in the imperative mood (Bagas Sanjaya)
- add Unixbench overhead results
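
As a rough illustration of the static_branch_inc/dec change (not the exact
patch code): incrementing and decrementing the key lets the global sysctl
and per-process enables each hold an independent reference, which
static_branch_enable/disable cannot express:

	/* Illustrative sketch only; the helper names are hypothetical. */
	#include <linux/jump_label.h>

	DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);

	/* One more user of NUMA balancing (the global sysctl or a process). */
	static void numa_balancing_ref_get(void)
	{
		static_branch_inc(&sched_numa_balancing);
	}

	/* Drop a user; the key is disabled only when the count reaches zero. */
	static void numa_balancing_ref_put(void)
	{
		static_branch_dec(&sched_numa_balancing);
	}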

Changes in v4:
- code cleanup: add wrapper function `numa_balancing_enabled`

Changes in v3:
- Fix a compile error.

Changes in v2:
- PR_NUMA_BALANCING now supports three states: enabled, disabled, and
  default. enabled and disabled ignore the global setting, while default
  follows it (see the sketch below).
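
A hedged kernel-side sketch of how the three states could resolve against
the global key; the field name mm->numab_mode, the NUMAB_MODE_* values, and
the helper signature below are assumptions for illustration, not
necessarily what the patches use:

	/* Illustrative sketch only, not the exact patch code. */
	#include <linux/jump_label.h>
	#include <linux/mm_types.h>
	#include <linux/sched.h>

	enum {
		NUMAB_MODE_DEFAULT,	/* follow /proc/sys/kernel/numa_balancing */
		NUMAB_MODE_DISABLED,	/* force off for this process */
		NUMAB_MODE_ENABLED,	/* force on for this process */
	};

	DECLARE_STATIC_KEY_FALSE(sched_numa_balancing);

	static inline bool numa_balancing_enabled(struct task_struct *p)
	{
		if (!p->mm)	/* kernel threads have no address space */
			return false;

		switch (READ_ONCE(p->mm->numab_mode)) {	/* hypothetical field */
		case NUMAB_MODE_ENABLED:
			return true;
		case NUMAB_MODE_DISABLED:
			return false;
		default:	/* NUMAB_MODE_DEFAULT: follow the global setting */
			return static_branch_unlikely(&sched_numa_balancing);
		}
	}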

Gang Li (2):
  sched/numa: use static_branch_inc/dec for sched_numa_balancing
  sched/numa: add per-process numa_balancing

 Documentation/filesystems/proc.rst   |  2 ++
 fs/proc/task_mmu.c                   | 20 ++++++++++++
 include/linux/mm_types.h             |  3 ++
 include/linux/sched/numa_balancing.h | 45 ++++++++++++++++++++++++++
 include/uapi/linux/prctl.h           |  7 +++++
 kernel/fork.c                        |  4 +++
 kernel/sched/core.c                  | 26 +++++++--------
 kernel/sched/fair.c                  |  9 +++---
 kernel/sys.c                         | 47 ++++++++++++++++++++++++++++
 mm/mprotect.c                        |  6 ++--
 10 files changed, 150 insertions(+), 19 deletions(-)

-- 
2.20.1
