Date:   Tue, 20 Feb 2018 19:26:05 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Morten Rasmussen <morten.rasmussen@....com>
Cc:     mingo@...hat.com, valentin.schneider@....com,
        dietmar.eggemann@....com, vincent.guittot@...aro.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/7] sched/fair: Avoid unnecessary balancing of
 asymmetric capacity groups

On Tue, Feb 20, 2018 at 04:33:52PM +0000, Morten Rasmussen wrote:
> On Mon, Feb 19, 2018 at 04:10:11PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 15, 2018 at 04:20:51PM +0000, Morten Rasmussen wrote:
> > > +/*
> > > + * group_similar_cpu_capacity: Returns true if the minimum capacity of the
> > > + * compared groups differ by less than 12.5%.
> > > + */
> > > +static inline bool
> > > +group_similar_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
> > > +{
> > > +	long diff = sg->sgc->min_capacity - ref->sgc->min_capacity;
> > > +	long max = max(sg->sgc->min_capacity, ref->sgc->min_capacity);
> > > +
> > > +	return abs(diff) < max >> 3;
> > > +}
> > 
> > This seems a fairly random and dodgy heuristic.
> 
> I can't deny that :-)
> 
> We need to somehow figure out if we are doing asymmetric cpu capacity
> balancing or normal SMP balancing. We probably don't care about
> migrating tasks if the capacities are nearly identical. But how much is
> 'nearly'?
> 
> We could make it strictly equal as long as sgc->min_capacity is based on
> capacity_orig. If we let things like rt-pressure influence
> sgc->min_capacity, it might become a mess.

See, that is the problem: I think this min_capacity thing is
influenced by rt-pressure and the like.

See update_cpu_capacity(), min_capacity is set after we add the RT scale
factor thingy, and then update_group_capacity() filters the min of the
whole group. The thing only ever goes down.

But this means that if a big CPU has a very high IRQ/RT load, its
capacity will dip below that of a little core and min_capacity for the
big group as a whole will appear smaller than that of the little group.

Or am I now terminally confused again?
