Message-Id: <20210401193006.3392788-1-valentin.schneider@arm.com>
Date: Thu, 1 Apr 2021 20:30:03 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: linux-kernel@...r.kernel.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>,
Quentin Perret <qperret@...gle.com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
Rik van Riel <riel@...riel.com>,
Lingutla Chandrasekhar <clingutla@...eaurora.org>
Subject: [PATCH v4 0/3] sched/fair: load-balance vs capacity margins

Hi folks,

I split the extra misfit patches out of v3, as I'm still reworking them
following Vincent's comments. In the meantime, I believe the first few
patches of the series stand on their own.
o Patch 1 prevents pcpu kworkers from causing group_imbalanced
o Patch 2 is an independent active balance cleanup
o Patch 3 introduces yet another margin for capacity to capacity
comparisons
The "important" one is patch 3, as it solves misfit migration issues on newer
platforms.
This is based on top of today's tip/sched/core at:
0a2b65c03e9b ("sched/topology: Remove redundant cpumask_and() in init_overlap_sched_group()")
Testing
=======
I ran my usual [1] misfit tests on
o TC2
o Juno
o HiKey960
o Dragonboard845C
o RB5
RB5 has a similar topology to Pixel4 and highlights the problem of having
two different CPU capacity values above 819 (in this case 871 and 1024):
without these patches, CPU hogs (i.e. misfit tasks) running on the "medium"
CPUs will never be upmigrated to a "big" CPU via misfit balance.
The 0day bot reported [3] that the first patch causes a ~14% regression in its
stress-ng.vm-segv testcase. I ran that testcase on:
o Ampere eMAG (arm64, 32 cores)
o 2-socket Xeon E5-2690 (x86, 40 cores)
and found at worst a 0.3% regression and at best a 2% improvement - I'm
getting nowhere near -14%.
Revisions
=========
v3 -> v4
--------
o Tore out the extra misfit patches
o Rewrote patch 1 changelog (Dietmar)
o Reused LBF_ACTIVE_BALANCE to ditch LBF_DST_PINNED active balance logic
(Dietmar)
o Collected Tested-by (Lingutla)
o Squashed capacity_greater() introduction and use (Vincent)
o Removed sched_asym_cpucapacity() static key proliferation (Vincent)
v2 -> v3
--------
o Rebased on top of latest tip/sched/core
o Added test results vs stress-ng.vm-segv
v1 -> v2
--------
o Collected Reviewed-by
o Minor comment and code cleanups
o Consolidated static key vs SD flag explanation (Dietmar)
Note to Vincent: I didn't measure the impact of adding said static key to
load_balance(); I do however believe it is low-hanging fruit. The
wrapper keeps things neat and tidy, and is also helpful for documenting
the intricacies of the static key status vs the presence of the SD flag
in a CPU's sched_domain hierarchy.
o Removed v1 patch 4 - root_domain.max_cpu_capacity is absolutely not what
I had convinced myself it was.
o Squashed capacity margin usage with removal of
group_smaller_{min, max}_capacity() (Vincent)
o Replaced v1 patch 7 with Lingutla's can_migrate_task() patch [2]
o Rewrote task_hot() modification changelog
Links
=====
[1]: https://lisa-linux-integrated-system-analysis.readthedocs.io/en/master/kernel_tests.html#lisa.tests.scheduler.misfit.StaggeredFinishes
[2]: http://lore.kernel.org/r/20210217120854.1280-1-clingutla@codeaurora.org
[3]: http://lore.kernel.org/r/20210223023004.GB25487@xsang-OptiPlex-9020
Cheers,
Valentin
Lingutla Chandrasekhar (1):
sched/fair: Ignore percpu threads for imbalance pulls
Valentin Schneider (2):
sched/fair: Clean up active balance nr_balance_failed trickery
sched/fair: Introduce a CPU capacity comparison helper
kernel/sched/fair.c | 68 +++++++++++++++++++--------------------------
1 file changed, 29 insertions(+), 39 deletions(-)
--
2.25.1