Message-Id: <20230412140701.58337-1-ligang.bdlg@bytedance.com>
Date: Wed, 12 Apr 2023 22:06:58 +0800
From: Gang Li <ligang.bdlg@...edance.com>
To: John Hubbard <jhubbard@...dia.com>,
Jonathan Corbet <corbet@....net>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>
Cc: linux-api@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-doc@...r.kernel.org,
Gang Li <ligang.bdlg@...edance.com>
Subject: [PATCH v6 0/2] sched/numa: add per-process numa_balancing
# Introduction
Add PR_NUMA_BALANCING to prctl.
A large number of page faults causes a performance loss while NUMA
balancing is in progress. Processes that care about worst-case performance
therefore need NUMA balancing disabled. Others, on the contrary, can accept
a temporary performance loss in exchange for higher average performance, so
enabling NUMA balancing is better for them.
NUMA balancing can currently only be controlled globally via
/proc/sys/kernel/numa_balancing. Because of the cases above, we want to
enable/disable numa_balancing per process instead.
Set per-process numa balancing:
prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DISABLE); //disable
prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_ENABLE); //enable
prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DEFAULT); //follow global
Get numa_balancing state:
prctl(PR_NUMA_BALANCING, PR_GET_NUMA_BALANCING, &ret);
cat /proc/<pid>/status | grep NumaB_mode
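
For illustration, here is a minimal userspace sketch of the proposed
interface. It assumes a kernel and <linux/prctl.h> carrying this series,
which define PR_NUMA_BALANCING and the PR_{SET,GET}_NUMA_BALANCING_*
constants; the exact type of the get-side output argument follows the
series' definition, an int is used here for the sketch:

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
        int mode;

        /* Disable NUMA balancing for this process, ignoring the global knob. */
        if (prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DISABLE))
                perror("PR_SET_NUMA_BALANCING_DISABLE");

        /* Read the current per-process mode back. */
        if (prctl(PR_NUMA_BALANCING, PR_GET_NUMA_BALANCING, &mode))
                perror("PR_GET_NUMA_BALANCING");
        else
                printf("numa_balancing mode: %d\n", mode);

        /* Go back to following the global /proc/sys/kernel/numa_balancing. */
        if (prctl(PR_NUMA_BALANCING, PR_SET_NUMA_BALANCING_DEFAULT))
                perror("PR_SET_NUMA_BALANCING_DEFAULT");

        return 0;
}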
# Unixbench
The numbers below show the overhead introduced by this patch, not a
performance improvement.
+-------------------+----------+
| NAME | OVERHEAD |
+-------------------+----------+
| Pipe_Throughput | 0.98% |
| Context_Switching | -0.96% |
| Process_Creation | 1.18% |
+-------------------+----------+
# Changes
Changes in v6:
- rebase on top of next-20230411
- run Unixbench on physical machine
- collect Acked-by from John Hubbard <jhubbard@...dia.com>
Changes in v5:
- replace numab_enabled with numa_balancing_mode (Peter Zijlstra)
- make numa_balancing_enabled and numa_balancing_mode inline (Peter Zijlstra)
- use static_branch_inc/dec instead of static_branch_enable/disable (Peter Zijlstra)
- delete CONFIG_NUMA_BALANCING in task_tick_fair (Peter Zijlstra)
- reword commit, use imperative mood (Bagas Sanjaya)
- add Unixbench overhead results
Changes in v4:
- code clean: add wrapper function `numa_balancing_enabled`
Changes in v3:
- Fix compile error.
Changes in v2:
- Now PR_NUMA_BALANCING supports three states: enabled, disabled and default.
  Enabled and disabled ignore the global setting, while default follows the
  global setting.
Gang Li (2):
sched/numa: use static_branch_inc/dec for sched_numa_balancing
sched/numa: add per-process numa_balancing
Documentation/filesystems/proc.rst | 2 ++
fs/proc/task_mmu.c | 20 ++++++++++++
include/linux/mm_types.h | 3 ++
include/linux/sched/numa_balancing.h | 45 ++++++++++++++++++++++++++
include/uapi/linux/prctl.h | 8 +++++
kernel/fork.c | 4 +++
kernel/sched/core.c | 26 +++++++--------
kernel/sched/fair.c | 9 +++---
kernel/sys.c | 47 ++++++++++++++++++++++++++++
mm/mprotect.c | 6 ++--
10 files changed, 151 insertions(+), 19 deletions(-)
--
2.20.1