Date: Wed, 25 Nov 2020 14:02:53 -0000
From: "tip-bot2 for Mel Gorman" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Mel Gorman <mgorman@...hsingularity.net>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched: Avoid unnecessary calculation of load imbalance at clone time

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     5c339005f854fa75aa46078ad640919425658b3e
Gitweb:        https://git.kernel.org/tip/5c339005f854fa75aa46078ad640919425658b3e
Author:        Mel Gorman <mgorman@...hsingularity.net>
AuthorDate:    Fri, 20 Nov 2020 09:06:28
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 24 Nov 2020 16:47:47 +01:00

sched: Avoid unnecessary calculation of load imbalance at clone time

In find_idlest_group(), the load imbalance is only relevant when the
group is either overloaded or fully busy, but it is calculated
unconditionally. This patch moves the imbalance calculation to the
context in which it is required. Technically it is a micro-optimisation,
but the real benefit is avoiding confusing one type of imbalance with
another depending on the group_type in the next patch.

No functional change.

Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
Link: https://lkml.kernel.org/r/20201120090630.3286-3-mgorman@techsingularity.net
---
 kernel/sched/fair.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9d10abe..2626c6b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8777,9 +8777,6 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 			.group_type = group_overloaded,
 	};
 
-	imbalance = scale_load_down(NICE_0_LOAD) *
-				(sd->imbalance_pct-100) / 100;
-
 	do {
 		int local_group;
 
@@ -8833,6 +8830,11 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 	switch (local_sgs.group_type) {
 	case group_overloaded:
 	case group_fully_busy:
+
+		/* Calculate allowed imbalance based on load */
+		imbalance = scale_load_down(NICE_0_LOAD) *
+				(sd->imbalance_pct-100) / 100;
+
 		/*
 		 * When comparing groups across NUMA domains, it's possible for
 		 * the local domain to be very lightly loaded relative to the
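
For reference, below is a standalone userspace sketch of the allowed-imbalance
formula that the patch moves into the overloaded/fully-busy case. The constants
NICE_0_LOAD_SCALED (1024, a typical result of scale_load_down(NICE_0_LOAD)) and
IMBALANCE_PCT (117) are assumptions chosen for illustration only; they are not
taken from this mail or from any particular kernel configuration.

/*
 * Standalone sketch (not kernel code) of the allowed-imbalance formula
 * used by find_idlest_group().  The two constants below are assumed,
 * illustrative values, not values quoted in the patch.
 */
#include <stdio.h>

#define NICE_0_LOAD_SCALED	1024	/* assumed scale_load_down(NICE_0_LOAD) */
#define IMBALANCE_PCT		117	/* assumed sd->imbalance_pct */

int main(void)
{
	/* Same arithmetic as the patch: slack expressed as a fraction of
	 * one nice-0 task's load. */
	long imbalance = NICE_0_LOAD_SCALED * (IMBALANCE_PCT - 100) / 100;

	printf("allowed imbalance = %ld load units\n", imbalance);	/* prints 174 */
	return 0;
}

With those assumed numbers the threshold comes out to 174 load units, i.e.
about 17% of a single nice-0 task; the patch only changes where this value is
computed (inside the group_overloaded/group_fully_busy cases), not what it
means.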