Date: Fri, 20 Nov 2020 14:32:31 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>,
	Valentin Schneider <valentin.schneider@....com>,
	Juri Lelli <juri.lelli@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/4] sched: Avoid unnecessary calculation of load imbalance at clone time

On Fri, 20 Nov 2020 at 10:06, Mel Gorman <mgorman@...hsingularity.net> wrote:
>
> In find_idlest_group(), the load imbalance is only relevant when the group
> is either overloaded or fully busy but it is calculated unconditionally.
> This patch moves the imbalance calculation to the context it is required.
> Technically, it is a micro-optimisation but really the benefit is avoiding
> confusing one type of imbalance with another depending on the group_type
> in the next patch.
>
> No functional change.
>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>

Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>

> ---
>  kernel/sched/fair.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 5fbed29e4001..9aded12aaa90 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8777,9 +8777,6 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  			.group_type = group_overloaded,
>  	};
>
> -	imbalance = scale_load_down(NICE_0_LOAD) *
> -			(sd->imbalance_pct-100) / 100;
> -
>  	do {
>  		int local_group;
>
> @@ -8833,6 +8830,11 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  	switch (local_sgs.group_type) {
>  	case group_overloaded:
>  	case group_fully_busy:
> +
> +		/* Calculate allowed imbalance based on load */
> +		imbalance = scale_load_down(NICE_0_LOAD) *
> +				(sd->imbalance_pct-100) / 100;
> +
>  		/*
>  		 * When comparing groups across NUMA domains, it's possible for
>  		 * the local domain to be very lightly loaded relative to the
> --
> 2.26.2
>
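For a rough sense of the numbers involved, below is a minimal standalone
sketch of the allowed-imbalance expression moved by the hunk above. The
values used for scale_load_down(NICE_0_LOAD) (1024) and sd->imbalance_pct
(117) are assumptions chosen for illustration and are not taken from the
patch itself; only the integer arithmetic mirrors the kernel expression.

	/*
	 * Illustrative sketch only, not kernel code: evaluates the
	 * allowed-imbalance formula with assumed input values.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long nice_0_load = 1024;  /* assumed scale_load_down(NICE_0_LOAD) */
		unsigned int imbalance_pct = 117;  /* assumed sd->imbalance_pct */
		unsigned long imbalance;

		/* Same integer arithmetic as the expression in the patch */
		imbalance = nice_0_load * (imbalance_pct - 100) / 100;

		printf("allowed imbalance = %lu load units\n", imbalance); /* 174 */
		return 0;
	}

With these assumed inputs the group is allowed to be about 174 load units
(roughly 17% of a nice-0 task's weight) more loaded before it is treated
as imbalanced; the patch only changes where this value is computed, not
its result.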