Message-ID: <20241217094156.577262-1-arighi@nvidia.com>
Date: Tue, 17 Dec 2024 10:32:25 +0100
From: Andrea Righi <arighi@...dia.com>
To: Tejun Heo <tj@...nel.org>,
David Vernet <void@...ifault.com>,
Changwoo Min <changwoo@...lia.com>
Cc: Yury Norov <yury.norov@...il.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: [PATCHSET v7 sched_ext/for-6.14] sched_ext: split global idle cpumask into per-NUMA cpumasks

= Overview =

As discussed during the sched_ext office hours, using a single global
cpumask to keep track of the idle CPUs can be inefficient and does not
scale well on large NUMA systems.

Therefore, split the idle cpumask into multiple per-NUMA-node cpumasks
to improve scalability and performance on such systems.

The scalability issues seem to be particularly noticeable on Intel
Sapphire Rapids dual-socket architectures.
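
To illustrate the direction of the change, here is a minimal sketch of
the data layout (identifiers such as idle_cpumasks and
update_cpu_idle_state() are illustrative assumptions, not the actual
code introduced by this series):

  #include <linux/cpumask.h>
  #include <linux/topology.h>

  /* Before: a single global mask, shared (and bounced) across nodes. */
  static cpumask_var_t global_idle_cpumask;

  /* After: one idle cpumask per NUMA node, indexed by node id. */
  static cpumask_var_t *idle_cpumasks;

  static void update_cpu_idle_state(int cpu, bool idle)
  {
          int node = cpu_to_node(cpu);

          /*
           * Idle-state updates now only touch the mask of the CPU's
           * own node, reducing cross-node cache-line contention.
           */
          if (idle)
                  cpumask_set_cpu(cpu, idle_cpumasks[node]);
          else
                  cpumask_clear_cpu(cpu, idle_cpumasks[node]);
  }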

= Test =

Hardware:
- System: DGX B200
- CPUs: 224 SMT threads (112 physical cores)
- Processor: INTEL(R) XEON(R) PLATINUM 8570
- 2 NUMA nodes

Scheduler:
- scx_simple [1] (so that we can focus on the built-in idle selection
  policy rather than on the scheduling policy itself)

Test:
- Run a parallel kernel build `make -j $(nproc)` and measure the
  average elapsed time over 10 runs:

           avg time | stdev
          ----------+------
  before:   52.431s | 2.895
  after:    50.342s | 2.895

= Conclusion =

Splitting the global cpumask into multiple per-NUMA cpumasks helped to
achieve a speedup of approximately +4% with this particular architecture
and test case.

I've repeated the same test on a DGX-1 (40 physical cores, Intel Xeon
E5-2698 v4 @ 2.20GHz, 2 NUMA nodes) and I didn't observe any measurable
difference.

In general, on smaller systems, I haven't noticed any measurable
regressions or improvements with the same test (parallel kernel build)
and scheduler (scx_simple).

NOTE: splitting the global cpumask into multiple cpumasks may increase
the overhead of scx_bpf_pick_idle_cpu() or ops.select_cpu() (for
schedulers relying on the built-in idle CPU selection policy) in the
presence of multiple NUMA nodes, particularly under high system load,
since we may have to scan multiple cpumasks to find an idle CPU.

However, this increased overhead seems to be more than offset by the
lower overhead of updating the idle state (__scx_update_idle()) and by
the fact that CPUs are more likely to operate within their local idle
cpumask, reducing the pressure on the cache coherency protocol.
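
As a rough sketch of this lookup path, reusing the hypothetical
idle_cpumasks array from the earlier sketch (the iterator's exact
signature is an assumption based on the for_each_numa_hop_node() helper
introduced by this series):

  /*
   * Pick an idle CPU by scanning the per-node cpumasks in order of
   * increasing NUMA distance from start_node.
   */
  static s32 pick_idle_cpu_from_node(int start_node)
  {
          int node;
          s32 cpu;

          /* Visit nodes from nearest to farthest. */
          for_each_numa_hop_node(node, start_node) {
                  cpu = cpumask_any_and(idle_cpumasks[node],
                                        cpu_online_mask);
                  if (cpu < nr_cpu_ids)
                          return cpu; /* idle CPU found on this node */
          }

          return -EBUSY; /* no idle CPU available on any node */
  }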

= References =

[1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_simple.bpf.c

ChangeLog v6 -> v7:
- addressed some issues based on Yury's review (thanks!)
- introduced a new iterator to navigate the NUMA nodes in order of
  increasing distance

ChangeLog v5 -> v6:
- refactored the patch set to introduce SCX_OPS_NODE_BUILTIN_IDLE
  before the per-node cpumasks
- moved the idle CPU selection policy to a separate file (ext_idle.c)
  (no functional change, just some code shuffling)

ChangeLog v4 -> v5:
- introduced the new scx_bpf_cpu_to_node() kfunc (see the usage sketch
  after this changelog)
- provided __COMPAT_*() helpers for the new kfuncs

ChangeLog v3 -> v4:
- introduced SCX_OPS_NODE_BUILTIN_IDLE to select between multiple
  per-node cpumasks and a single flat cpumask
- introduced new kfuncs to access per-node idle cpumask information
- used for_each_numa_hop_mask() to traverse NUMA nodes in order of
  increasing distance
- dropped the nodemask helpers (not needed anymore)
- rebased to sched_ext/for-6.14

ChangeLog v2 -> v3:
- introduced for_each_online_node_wrap()
- re-introduced cpumask_intersects() in test_and_clear_cpu_idle() (to
  reduce memory writes / cache coherence pressure)
- got rid of the redundant scx_selcpu_topo_numa logic
  [test results are pretty much identical, so I haven't updated them
  from v2]

ChangeLog v1 -> v2:
- renamed for_each_node_mask|state_from() -> for_each_node_mask|state_wrap()
- misc cpumask optimizations (thanks to Yury)
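
For reference, BPF-side usage of the new kfunc might look roughly like
the snippet below. The __COMPAT_scx_bpf_cpu_to_node() wrapper name and
the kfunc's exact signature are assumptions based on the names in this
cover letter, not verified code from the series:

  #include <scx/common.bpf.h>
  #include <scx/compat.bpf.h>

  s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p,
                     s32 prev_cpu, u64 wake_flags)
  {
          bool is_idle = false;
          s32 cpu;

          /*
           * Let the built-in policy pick a CPU; with this series it
           * searches the per-node idle cpumasks starting from the
           * node of prev_cpu.
           */
          cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
          if (is_idle) {
                  /* e.g. account wakeups per NUMA node */
                  int node = __COMPAT_scx_bpf_cpu_to_node(cpu);

                  bpf_printk("idle CPU %d found on node %d", cpu, node);
                  scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
          }

          return cpu;
  }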

Andrea Righi (6):
  sched/topology: introduce for_each_numa_hop_node() / sched_numa_hop_node()
  sched_ext: Introduce SCX_OPS_NODE_BUILTIN_IDLE
  sched_ext: Introduce per-node idle cpumasks
  sched_ext: Get rid of the scx_selcpu_topo_numa logic
  sched_ext: Introduce NUMA aware idle cpu kfunc helpers
  sched_ext: Move built-in idle CPU selection policy to a separate file

 MAINTAINERS                              |   1 +
 include/linux/topology.h                 |  28 +-
 kernel/sched/ext.c                       | 742 ++-------------------------
 kernel/sched/ext_idle.c                  | 835 +++++++++++++++++++++++++++++++
 kernel/sched/topology.c                  |  49 ++
 tools/sched_ext/include/scx/common.bpf.h |   4 +
 tools/sched_ext/include/scx/compat.bpf.h |  19 +
 7 files changed, 984 insertions(+), 694 deletions(-)
 create mode 100644 kernel/sched/ext_idle.c