Message-Id: <20190708045432.18774-1-parth@linux.ibm.com>
Date: Mon, 8 Jul 2019 10:24:30 +0530
From: Parth Shah <parth@...ux.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org,
subhra.mazumdar@...cle.com
Subject: [RFC 0/2] Optimize the idle CPU search
When searching for an idle sibling, the scheduler first iterates in search
of an idle core and then, failing that, of an idle CPU. By maintaining a
mask of candidate CPUs while iterating through the cores, we can mark the
CPUs already found non-idle so that the idle CPU search does not have to
iterate over them again. This pays off especially on a moderately loaded
system.
Optimize the idle CPU search by marking the CPUs found non-idle during the
idle core search. This reduces the number of CPUs the idle CPU search has
to iterate over, and hence its search time.
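As a rough illustration, a minimal user-space sketch of the idea (a toy
model, not the kernel patch itself; busy(), NR_CPUS and SMT_WIDTH are
made-up stand-ins for !available_idle_cpu(), the LLC span and
cpu_smt_mask()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS   8	/* toy LLC span */
#define SMT_WIDTH 4	/* CPUs per core, e.g. POWER9 SMT-4 */

/* Stand-in for !available_idle_cpu(); pretend only CPU 5 is idle. */
static bool busy(int cpu)
{
	return cpu != 5;
}

int main(void)
{
	uint64_t cpus = (1ULL << NR_CPUS) - 1;	/* candidate CPU mask */
	int core, cpu, target = -1;

	/* Idle-core scan: clear every CPU observed busy from the mask. */
	for (core = 0; core < NR_CPUS; core += SMT_WIDTH) {
		bool idle = true;

		for (cpu = core; cpu < core + SMT_WIDTH; cpu++) {
			if (busy(cpu)) {
				idle = false;
				cpus &= ~(1ULL << cpu);
			}
		}
		if (idle) {
			target = core;	/* found a fully idle core */
			break;
		}
	}

	/*
	 * Idle-CPU scan: CPUs already seen busy above have been cleared
	 * from 'cpus', so they are not inspected a second time.
	 */
	if (target < 0) {
		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!(cpus & (1ULL << cpu)) || busy(cpu))
				continue;
			target = cpu;
			break;
		}
	}

	printf("selected CPU: %d\n", target);
	return 0;
}

In this toy run the core scan inspects all 8 CPUs and leaves only CPU 5 in
the mask, so the CPU scan has a single candidate left; in the kernel the
second scan iterates the mask directly, skipping the cleared CPUs outright.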
The results show that the time spent in select_idle_cpu decreases, with no
regression in the search time of select_idle_core and almost none on
schbench either. With proper tuning, schbench shows a benefit as well when
the idle core search fails most of the time.
While doing this, the locally used cpumask 'select_idle_mask' is renamed
(to 'iterator_mask', see patch 01) so that the existing name can be used
for this optimization.
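For reference, a sketch of what patch 01 amounts to, going by the patch
titles below (the exact hunks may differ):

-DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);
+DEFINE_PER_CPU(cpumask_var_t, iterator_mask);

This frees the 'select_idle_mask' name for patch 02, which uses it to carry
the not-yet-found-busy CPUs from select_idle_core() to select_idle_cpu().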
The patch set is based on tip/core/core.
Results
=======
IBM POWER9 system: 2-socket, 44 cores, 176 CPUs
Function latency (measured via the timebase (tb) tick):
(lower is better)
+--------------+----------+--------+-------------+--------+
| select_idle_ | Baseline | stddev | Patch | stddev |
+--------------+----------+--------+-------------+--------+
| core | 2080 | 1307 | 1975(+5.3%) | 1286 |
| cpu | 834 | 393 | 91(+89%) | 64 |
| sibling | 0.96 | 0.003 | 0.89(+7%) | 0.02 |
+--------------+----------+--------+-------------+--------+
Schbench:
- schbench -m 44 -t 1
(lower is better)
+------+----------+--------+------------+--------+
| %ile | Baseline | stddev | Patch | stddev |
+------+----------+--------+------------+--------+
| 50   | 9.9      | 2      | 10(-1.01%) | 1.4    |
| 95   | 465      | 3.9    | 465(0%)    | 2      |
| 99   | 561      | 24     | 483(+13.9%)| 14     |
| 99.5 | 631 | 29 | 635(-0.6%) | 32 |
| 99.9 | 801 | 41 | 763(+4.7%) | 125 |
+------+----------+--------+------------+--------+
- 44 threads spread across cores to make select_idle_core return -1 most
  of the time
- schbench -m 44 -t 1
(lower is better)
+-------+----------+--------+-----------+--------+
| %ile  | Baseline | stddev | Patch     | stddev |
+-------+----------+--------+-----------+--------+
| 50 | 10 | 9 | 12(-20%) | 1 |
| 95 | 468 | 3 | 31(+93%) | 1 |
| 99 | 577 | 16 | 477(+17%) | 38 |
| 99.95 | 647 | 26 | 482(+25%) | 2 |
| 99.99 | 835 | 61 | 492(+41%) | 2 |
+-------+----------+--------+-----------+--------+
Hackbench:
- 44 threads spread across cores to make select_idle_core return -1 most
  of the time
- perf bench sched messaging -g 1 -l 100000
(time in seconds; lower is better)
+----------+--------+--------------+--------+
| Baseline | stddev | patch | stddev |
+----------+--------+--------------+--------+
| 16.107 | 0.62 | 16.02(+0.5%) | 0.32 |
+----------+--------+--------------+--------+
Series:
- Patch 01: Rename select_idle_mask to reuse the name in the next patch
- Patch 02: Optimize the wakeup fast path
Parth Shah (2):
sched/fair: Rename select_idle_mask to iterator_mask
sched/fair: Optimize idle CPU search
kernel/sched/core.c | 3 +++
kernel/sched/fair.c | 15 ++++++++++-----
2 files changed, 13 insertions(+), 5 deletions(-)
--
2.17.1