Date:	Thu, 9 Oct 2014 17:30:25 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	Ingo Molnar <mingo@...nel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Morten Rasmussen <Morten.Rasmussen@....com>,
	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	Russell King - ARM Linux <linux@....linux.org.uk>,
	LAK <linux-arm-kernel@...ts.infradead.org>,
	Rik van Riel <riel@...hat.com>,
	Mike Galbraith <efault@....de>,
	Nicolas Pitre <nicolas.pitre@...aro.org>,
	"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Paul Turner <pjt@...gle.com>,
	Benjamin Segall <bsegall@...gle.com>
Subject: Re: [PATCH v7 2/7] sched: move cfs task on a CPU with higher capacity

On Thu, Oct 09, 2014 at 04:59:36PM +0200, Vincent Guittot wrote:
> On 9 October 2014 13:23, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Tue, Oct 07, 2014 at 02:13:32PM +0200, Vincent Guittot wrote:
> >> +++ b/kernel/sched/fair.c
> >> @@ -5896,6 +5896,18 @@ fix_small_capacity(struct sched_domain *sd, struct sched_group *group)
> >>  }
> >>
> >>  /*
> >> + * Check whether the capacity of the rq has been noticeably reduced by side
> >> + * activity. The imbalance_pct is used for the threshold.
> >> + * Return true if the capacity is reduced
> >> + */
> >> +static inline int
> >> +check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
> >> +{
> >> +     return ((rq->cpu_capacity * sd->imbalance_pct) <
> >> +                             (rq->cpu_capacity_orig * 100));
> >> +}
> >> +
> >> +/*
> >>   * Group imbalance indicates (and tries to solve) the problem where balancing
> >>   * groups is inadequate due to tsk_cpus_allowed() constraints.
> >>   *
> >> @@ -6567,6 +6579,14 @@ static int need_active_balance(struct lb_env *env)
> >>                */
> >>               if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
> >>                       return 1;
> >> +
> >> +             /*
> >> +              * The src_cpu's capacity is reduced because of other
> >> +              * sched_class or IRQs; we trigger an active balance to
> >> +              * move the task.
> >> +              */
> >> +             if (check_cpu_capacity(env->src_rq, sd))
> >> +                     return 1;
> >>       }
> >
> > So does it make sense to first check if there's a better candidate at
> > all? By this time we've already iterated the current SD while trying
> > regular load balancing, so we could know this.
> 
> I'm not sure I completely catch your point.
> Normally, f_b_g and f_b_q have already looked at the best candidate
> when we call need_active_balance, and src_cpu has been elected.
> Or have I missed your point?

Yep you did indeed miss my point.

So I've always disliked this patch for its arbitrary nature: why
unconditionally try an active balance every time there is 'some' RT/IRQ
usage? It could be that all CPUs are over that arbitrary threshold, and
we'd end up active balancing to no point.

So, since we've already iterated all CPUs in our domain back in
update_sd_lb_stats() we could have computed the CFS fraction:

	1024 * capacity / capacity_orig

for every cpu and collected the min/max of this. Then we can compute
whether src is significantly affected compared to the others (and there
I suppose we can indeed use imb).

