Message-Id: <cover.1686554037.git.yu.c.chen@intel.com>
Date: Tue, 13 Jun 2023 00:17:53 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>
Cc: Tim Chen <tim.c.chen@...el.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
K Prateek Nayak <kprateek.nayak@....com>,
Abel Wu <wuyun.abel@...edance.com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Len Brown <len.brown@...el.com>,
Chen Yu <yu.chen.surf@...il.com>,
Yicong Yang <yangyicong@...ilicon.com>,
linux-kernel@...r.kernel.org, Chen Yu <yu.c.chen@...el.com>
Subject: [RFC PATCH 0/4] Limit the scan depth to find the busiest sched group during newidle balance
Hi,
This is an attempt to reduce the cost of newidle balance, which has been
found to occupy noticeable CPU cycles on some high core count systems.
For example, when running sqlite on Intel Sapphire Rapids, which has
2 x 56C/112T = 224 CPUs:
6.69% 0.09% sqlite3 [kernel.kallsyms] [k] newidle_balance
5.39% 4.71% sqlite3 [kernel.kallsyms] [k] update_sd_lb_stats
The main idea comes from the following question raised by Tim:
Do we always have to find the busiest group and pull from it? Would
a relatively busy group be enough?
The ILB_UTIL proposal adjusts the newidle balance scan depth within the
current sched domain based on the utilization of that domain: the more
spare capacity the domain has, the more time each newidle balance is
allowed to spend scanning for a busy group. Although newidle balance
already uses the per-domain max_newidle_lb_cost to decide whether to
launch a balance at all, ILB_UTIL adds a finer-grained control that
decides how many groups each newidle balance may scan.
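To illustrate the idea, below is a minimal standalone userspace sketch
(not the kernel code in this series): it only models the arithmetic of
deriving a scan depth from the spare capacity of a domain, and the names
and numbers (nr_groups, the capacity values) are made up for illustration.

#include <stdio.h>

/* Scale the allowed scan depth by the fraction of spare capacity. */
static int ilb_util_scan_depth(int nr_groups,
			       unsigned long util, unsigned long cap)
{
	unsigned long spare = cap > util ? cap - util : 0;
	int depth = (int)(nr_groups * spare / cap);

	return depth > 0 ? depth : 1;	/* always scan at least one group */
}

int main(void)
{
	/* A domain with 8 groups and capacity 8192: idle vs. busy system. */
	printf("underloaded: scan %d groups\n",
	       ilb_util_scan_depth(8, 1024, 8192));
	printf("overloaded:  scan %d groups\n",
	       ilb_util_scan_depth(8, 7680, 8192));
	return 0;
}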
Patch 1/4 is a code cleanup.
Patch 2/4 introduces a new field in the sched domain to record the
number of groups; it is used by patch 3 and patch 4.
Patch 3/4 calculates the scan depth during each periodic load balance.
Patch 4/4 limits the scan depth based on the result of patch 3; the
depth is used by newidle_balance() -> find_busiest_group() ->
update_sd_lb_stats(), as shown in the sketch below.
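The following standalone sketch only illustrates the effect of the
throttled scan (it is not the update_sd_lb_stats() change itself): the
scan stops after 'depth' groups and settles for the busiest group seen
so far, rather than the busiest group overall. The loads are made up.

#include <stdio.h>

/* Return the index of the busiest group among the first 'depth' groups. */
static int find_busiest_within_depth(const unsigned long *load,
				     int nr_groups, int depth)
{
	int i, busiest = -1;
	unsigned long max_load = 0;

	for (i = 0; i < nr_groups && i < depth; i++) {
		if (load[i] > max_load) {
			max_load = load[i];
			busiest = i;
		}
	}
	return busiest;
}

int main(void)
{
	unsigned long load[] = { 300, 800, 200, 950, 100, 700 };

	/* With depth 3 we settle for group 1, not the true busiest (3). */
	printf("busiest within depth 3: group %d\n",
	       find_busiest_within_depth(load, 6, 3));
	printf("busiest with full scan: group %d\n",
	       find_busiest_within_depth(load, 6, 6));
	return 0;
}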
According to the test results, netperf/tbench show some improvement
when the system is underloaded, while hackbench/schbench show no
noticeable difference. While I'm still running more benchmarks,
including some macro-benchmarks, I'm sending this draft out to ask the
community whether this is the right thing to do and whether we are
heading in the right direction.
[We also have other wild ideas, such as sorting the groups by their
load during the periodic load balance so that a later newidle_balance()
can fetch the busiest candidate in O(1). That change also seems to show
improvement according to the test results; a rough sketch follows
below.]
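The sketch below only illustrates that alternative (it is not part of
this series): the per-group loads are sorted once during the periodic
balance, so picking the busiest candidate later becomes an O(1) lookup.
All names and numbers are made up.

#include <stdio.h>
#include <stdlib.h>

/* Sort loads in descending order. */
static int cmp_load_desc(const void *a, const void *b)
{
	unsigned long la = *(const unsigned long *)a;
	unsigned long lb = *(const unsigned long *)b;

	return (la < lb) - (la > lb);
}

int main(void)
{
	/* Periodic load balance: sort a snapshot of per-group loads. */
	unsigned long load[] = { 300, 800, 200, 950, 100, 700 };
	size_t nr = sizeof(load) / sizeof(load[0]);

	qsort(load, nr, sizeof(load[0]), cmp_load_desc);

	/* Later, newidle_balance() picks the busiest candidate in O(1). */
	printf("busiest candidate load: %lu\n", load[0]);
	return 0;
}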
Any comments would be appreciated.
Chen Yu (4):
sched/fair: Extract the function to get the sd_llc_shared
sched/topology: Introduce nr_groups in sched_domain to indicate the
number of groups
sched/fair: Calculate the scan depth for idle balance based on system
utilization
sched/fair: Throttle the busiest group scanning in idle load balance
include/linux/sched/topology.h | 5 +++
kernel/sched/fair.c | 74 +++++++++++++++++++++++++++++-----
kernel/sched/features.h | 1 +
kernel/sched/topology.c | 10 ++++-
4 files changed, 79 insertions(+), 11 deletions(-)
--
2.25.1